00:00:00.000 Started by upstream project "autotest-per-patch" build number 131823
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.023 using credential 00000000-0000-0000-0000-000000000002
00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.054 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.082 Using shallow fetch with depth 1
00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.082 > git --version # timeout=10
00:00:00.107 > git --version # 'git version 2.39.2'
00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.016 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.027 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.037 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:04.037 > git config core.sparsecheckout # timeout=10
00:00:04.047 > git read-tree -mu HEAD # timeout=10
00:00:04.063 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:04.086 Commit message: "packer: Fix typo in a package name"
00:00:04.086 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:04.190 [Pipeline] Start of Pipeline
00:00:04.203 [Pipeline] library
00:00:04.205 Loading library shm_lib@master
00:00:04.205 Library shm_lib@master is cached. Copying from home.
00:00:04.225 [Pipeline] node
00:00:04.238 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.240 [Pipeline] {
00:00:04.250 [Pipeline] catchError
00:00:04.251 [Pipeline] {
00:00:04.265 [Pipeline] wrap
00:00:04.275 [Pipeline] {
00:00:04.284 [Pipeline] stage
00:00:04.286 [Pipeline] { (Prologue)
00:00:04.304 [Pipeline] echo
00:00:04.305 Node: VM-host-WFP7
00:00:04.311 [Pipeline] cleanWs
00:00:04.323 [WS-CLEANUP] Deleting project workspace...
00:00:04.323 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.329 [WS-CLEANUP] done
00:00:04.482 [Pipeline] setCustomBuildProperty
00:00:04.550 [Pipeline] httpRequest
00:00:05.060 [Pipeline] echo
00:00:05.061 Sorcerer 10.211.164.101 is alive
00:00:05.068 [Pipeline] retry
00:00:05.069 [Pipeline] {
00:00:05.082 [Pipeline] httpRequest
00:00:05.087 HttpMethod: GET
00:00:05.087 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:05.088 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:05.088 Response Code: HTTP/1.1 200 OK
00:00:05.089 Success: Status code 200 is in the accepted range: 200,404
00:00:05.089 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:05.361 [Pipeline] }
00:00:05.382 [Pipeline] // retry
00:00:05.389 [Pipeline] sh
00:00:05.676 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:05.689 [Pipeline] httpRequest
00:00:06.929 [Pipeline] echo
00:00:06.930 Sorcerer 10.211.164.101 is alive
00:00:06.937 [Pipeline] retry
00:00:06.938 [Pipeline] {
00:00:06.949 [Pipeline] httpRequest
00:00:06.953 HttpMethod: GET
00:00:06.953 URL: http://10.211.164.101/packages/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz
00:00:06.954 Sending request to url: http://10.211.164.101/packages/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz
00:00:06.954 Response Code: HTTP/1.1 200 OK
00:00:06.955 Success: Status code 200 is in the accepted range: 200,404
00:00:06.955 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz
00:00:29.752 [Pipeline] }
00:00:29.772 [Pipeline] // retry
00:00:29.781 [Pipeline] sh
00:00:30.070 + tar --no-same-owner -xf spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz
00:00:32.626 [Pipeline] sh
00:00:32.912 + git -C spdk log --oneline -n5
00:00:32.912 e83d2213a bdev: Add spdk_bdev_io_to_ctx
00:00:32.912 cab1decc1 thread: add NUMA node support to spdk_iobuf_put()
00:00:32.912 40c9acf6d env: add spdk_mem_get_numa_id
00:00:32.912 0f99ab2fa thread: allocate iobuf memory based on numa_id
00:00:32.912 2ef611c19 thread: update all iobuf non-get/put functions for multiple NUMA nodes
00:00:32.933 [Pipeline] writeFile
00:00:32.950 [Pipeline] sh
00:00:33.236 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:33.249 [Pipeline] sh
00:00:33.533 + cat autorun-spdk.conf
00:00:33.533 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.533 SPDK_RUN_ASAN=1
00:00:33.533 SPDK_RUN_UBSAN=1
00:00:33.533 SPDK_TEST_RAID=1
00:00:33.533 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.542 RUN_NIGHTLY=0
00:00:33.544 [Pipeline] }
00:00:33.556 [Pipeline] // stage
00:00:33.570 [Pipeline] stage
00:00:33.573 [Pipeline] { (Run VM)
00:00:33.585 [Pipeline] sh
00:00:33.872 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:33.872 + echo 'Start stage prepare_nvme.sh'
00:00:33.872 Start stage prepare_nvme.sh
00:00:33.872 + [[ -n 5 ]]
00:00:33.872 + disk_prefix=ex5
00:00:33.872 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:33.872 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:33.872 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:33.872 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.872 ++ SPDK_RUN_ASAN=1
00:00:33.872 ++ SPDK_RUN_UBSAN=1
00:00:33.872 ++ SPDK_TEST_RAID=1
00:00:33.872 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.873 ++ RUN_NIGHTLY=0
00:00:33.873 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:33.873 + nvme_files=()
00:00:33.873 + declare -A nvme_files
00:00:33.873 + backend_dir=/var/lib/libvirt/images/backends
00:00:33.873 + nvme_files['nvme.img']=5G
00:00:33.873 + nvme_files['nvme-cmb.img']=5G
00:00:33.873 + nvme_files['nvme-multi0.img']=4G
00:00:33.873 + nvme_files['nvme-multi1.img']=4G
00:00:33.873 + nvme_files['nvme-multi2.img']=4G
00:00:33.873 + nvme_files['nvme-openstack.img']=8G
00:00:33.873 + nvme_files['nvme-zns.img']=5G
00:00:33.873 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:33.873 + (( SPDK_TEST_FTL == 1 ))
00:00:33.873 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:33.873 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:33.873 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.873 + for nvme in "${!nvme_files[@]}"
00:00:33.873 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:34.813 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:34.813 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:34.813 + echo 'End stage prepare_nvme.sh'
00:00:34.813 End stage prepare_nvme.sh
00:00:34.825 [Pipeline] sh
00:00:35.109 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:35.109 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:35.109
00:00:35.109 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:35.109 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:35.109 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:35.109 HELP=0
00:00:35.109 DRY_RUN=0
00:00:35.109 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:35.109 NVME_DISKS_TYPE=nvme,nvme,
00:00:35.109 NVME_AUTO_CREATE=0
00:00:35.109 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:35.109 NVME_CMB=,,
00:00:35.109 NVME_PMR=,,
00:00:35.109 NVME_ZNS=,,
00:00:35.109 NVME_MS=,,
00:00:35.109 NVME_FDP=,,
00:00:35.109 SPDK_VAGRANT_DISTRO=fedora39
00:00:35.109 SPDK_VAGRANT_VMCPU=10
00:00:35.109 SPDK_VAGRANT_VMRAM=12288
00:00:35.109 SPDK_VAGRANT_PROVIDER=libvirt
00:00:35.109 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:35.109 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:35.109 SPDK_OPENSTACK_NETWORK=0
00:00:35.109 VAGRANT_PACKAGE_BOX=0
00:00:35.109 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:35.109 FORCE_DISTRO=true
00:00:35.109 VAGRANT_BOX_VERSION=
00:00:35.109 EXTRA_VAGRANTFILES=
00:00:35.109 NIC_MODEL=virtio
00:00:35.109
00:00:35.109 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:35.109 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:37.019 Bringing machine 'default' up with 'libvirt' provider...
00:00:37.279 ==> default: Creating image (snapshot of base box volume).
00:00:37.539 ==> default: Creating domain with the following settings...
00:00:37.539 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1729878115_7da5789197de79b3456e
00:00:37.539 ==> default:  -- Domain type: kvm
00:00:37.539 ==> default:  -- Cpus: 10
00:00:37.539 ==> default:  -- Feature: acpi
00:00:37.539 ==> default:  -- Feature: apic
00:00:37.539 ==> default:  -- Feature: pae
00:00:37.539 ==> default:  -- Memory: 12288M
00:00:37.539 ==> default:  -- Memory Backing: hugepages:
00:00:37.539 ==> default:  -- Management MAC:
00:00:37.539 ==> default:  -- Loader:
00:00:37.539 ==> default:  -- Nvram:
00:00:37.539 ==> default:  -- Base box: spdk/fedora39
00:00:37.539 ==> default:  -- Storage pool: default
00:00:37.539 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729878115_7da5789197de79b3456e.img (20G)
00:00:37.539 ==> default:  -- Volume Cache: default
00:00:37.539 ==> default:  -- Kernel:
00:00:37.539 ==> default:  -- Initrd:
00:00:37.539 ==> default:  -- Graphics Type: vnc
00:00:37.539 ==> default:  -- Graphics Port: -1
00:00:37.539 ==> default:  -- Graphics IP: 127.0.0.1
00:00:37.539 ==> default:  -- Graphics Password: Not defined
00:00:37.539 ==> default:  -- Video Type: cirrus
00:00:37.539 ==> default:  -- Video VRAM: 9216
00:00:37.539 ==> default:  -- Sound Type:
00:00:37.539 ==> default:  -- Keymap: en-us
00:00:37.539 ==> default:  -- TPM Path:
00:00:37.539 ==> default:  -- INPUT: type=mouse, bus=ps2
00:00:37.539 ==> default:  -- Command line args:
00:00:37.539 ==> default:  -> value=-device,
00:00:37.539 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:37.539 ==> default:  -> value=-drive,
00:00:37.539 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:37.539 ==> default:  -> value=-device,
00:00:37.539 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.539 ==> default:  -> value=-device,
00:00:37.539 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:37.539 ==> default:  -> value=-drive,
00:00:37.539 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:37.540 ==> default:  -> value=-device,
00:00:37.540 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.540 ==> default:  -> value=-drive,
00:00:37.540 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:37.540 ==> default:  -> value=-device,
00:00:37.540 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.540 ==> default:  -> value=-drive,
00:00:37.540 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:37.540 ==> default:  -> value=-device,
00:00:37.540 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.540 ==> default: Creating shared folders metadata...
00:00:37.540 ==> default: Starting domain.
00:00:38.922 ==> default: Waiting for domain to get an IP address...
00:00:57.025 ==> default: Waiting for SSH to become available...
00:00:57.025 ==> default: Configuring and enabling network interfaces...
00:01:02.305     default: SSH address: 192.168.121.199:22
00:01:02.305     default: SSH username: vagrant
00:01:02.305     default: SSH auth method: private key
00:01:04.859 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:13.006 ==> default: Mounting SSHFS shared folder...
00:01:14.918 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:14.918 ==> default: Checking Mount..
00:01:16.824 ==> default: Folder Successfully Mounted!
00:01:16.824 ==> default: Running provisioner: file...
00:01:17.766     default: ~/.gitconfig => .gitconfig
00:01:18.025
00:01:18.025 SUCCESS!
00:01:18.025
00:01:18.025 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:18.025 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:18.025 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:18.025
00:01:18.036 [Pipeline] }
00:01:18.053 [Pipeline] // stage
00:01:18.062 [Pipeline] dir
00:01:18.062 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:18.064 [Pipeline] {
00:01:18.076 [Pipeline] catchError
00:01:18.078 [Pipeline] {
00:01:18.091 [Pipeline] sh
00:01:18.376 + vagrant ssh-config --host vagrant
00:01:18.376 + sed -ne /^Host/,$p
00:01:18.376 + tee ssh_conf
00:01:20.916 Host vagrant
00:01:20.916   HostName 192.168.121.199
00:01:20.916   User vagrant
00:01:20.916   Port 22
00:01:20.916   UserKnownHostsFile /dev/null
00:01:20.916   StrictHostKeyChecking no
00:01:20.916   PasswordAuthentication no
00:01:20.916   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:20.916   IdentitiesOnly yes
00:01:20.916   LogLevel FATAL
00:01:20.916   ForwardAgent yes
00:01:20.916   ForwardX11 yes
00:01:20.916
00:01:20.930 [Pipeline] withEnv
00:01:20.932 [Pipeline] {
00:01:20.945 [Pipeline] sh
00:01:21.229 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:21.229 source /etc/os-release
00:01:21.229 [[ -e /image.version ]] && img=$(< /image.version)
00:01:21.229 # Minimal, systemd-like check.
00:01:21.229 if [[ -e /.dockerenv ]]; then
00:01:21.229 # Clear garbage from the node's name:
00:01:21.229 # agt-er_autotest_547-896 -> autotest_547-896
00:01:21.229 # $HOSTNAME is the actual container id
00:01:21.229 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:21.229 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:21.229 # We can assume this is a mount from a host where container is running,
00:01:21.229 # so fetch its hostname to easily identify the target swarm worker.
00:01:21.229 container="$(< /etc/hostname) ($agent)"
00:01:21.229 else
00:01:21.229 # Fallback
00:01:21.229 container=$agent
00:01:21.229 fi
00:01:21.229 fi
00:01:21.229 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:21.229
00:01:21.502 [Pipeline] }
00:01:21.519 [Pipeline] // withEnv
00:01:21.527 [Pipeline] setCustomBuildProperty
00:01:21.545 [Pipeline] stage
00:01:21.547 [Pipeline] { (Tests)
00:01:21.566 [Pipeline] sh
00:01:21.855 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:22.131 [Pipeline] sh
00:01:22.416 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:22.689 [Pipeline] timeout
00:01:22.689 Timeout set to expire in 1 hr 30 min
00:01:22.691 [Pipeline] {
00:01:22.704 [Pipeline] sh
00:01:22.988 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:23.558 HEAD is now at e83d2213a bdev: Add spdk_bdev_io_to_ctx
00:01:23.571 [Pipeline] sh
00:01:23.856 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:24.185 [Pipeline] sh
00:01:24.469 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:24.746 [Pipeline] sh
00:01:25.030 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:25.291 ++ readlink -f spdk_repo
00:01:25.291 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:25.291 + [[ -n /home/vagrant/spdk_repo ]]
00:01:25.291 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:25.291 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:25.291 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:25.291 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:25.291 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:25.291 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:25.291 + cd /home/vagrant/spdk_repo
00:01:25.291 + source /etc/os-release
00:01:25.291 ++ NAME='Fedora Linux'
00:01:25.291 ++ VERSION='39 (Cloud Edition)'
00:01:25.291 ++ ID=fedora
00:01:25.291 ++ VERSION_ID=39
00:01:25.291 ++ VERSION_CODENAME=
00:01:25.291 ++ PLATFORM_ID=platform:f39
00:01:25.291 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:25.291 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:25.291 ++ LOGO=fedora-logo-icon
00:01:25.291 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:25.291 ++ HOME_URL=https://fedoraproject.org/
00:01:25.291 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:25.291 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:25.291 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:25.291 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:25.291 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:25.291 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:25.291 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:25.291 ++ SUPPORT_END=2024-11-12
00:01:25.291 ++ VARIANT='Cloud Edition'
00:01:25.291 ++ VARIANT_ID=cloud
00:01:25.291 + uname -a
00:01:25.291 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:25.291 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:25.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:25.861 Hugepages
00:01:25.861 node     hugesize     free /  total
00:01:25.861 node0   1048576kB        0 /      0
00:01:25.861 node0      2048kB        0 /      0
00:01:25.861
00:01:25.861 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:25.861 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:25.861 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:25.861 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:01:26.123 + rm -f /tmp/spdk-ld-path
00:01:26.123 + source autorun-spdk.conf
00:01:26.123 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.123 ++ SPDK_RUN_ASAN=1
00:01:26.123 ++ SPDK_RUN_UBSAN=1
00:01:26.123 ++ SPDK_TEST_RAID=1
00:01:26.123 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.123 ++ RUN_NIGHTLY=0
00:01:26.123 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:26.123 + [[ -n '' ]]
00:01:26.123 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:26.123 + for M in /var/spdk/build-*-manifest.txt
00:01:26.123 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:26.123 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.123 + for M in /var/spdk/build-*-manifest.txt
00:01:26.123 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:26.123 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.123 + for M in /var/spdk/build-*-manifest.txt
00:01:26.123 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:26.123 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.123 ++ uname
00:01:26.123 + [[ Linux == \L\i\n\u\x ]]
00:01:26.123 + sudo dmesg -T
00:01:26.123 + sudo dmesg --clear
00:01:26.123 + dmesg_pid=5419
00:01:26.123 + [[ Fedora Linux == FreeBSD ]]
00:01:26.123 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.123 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.123 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:26.123 + sudo dmesg -Tw
00:01:26.123 + [[ -x /usr/src/fio-static/fio ]]
00:01:26.123 + export FIO_BIN=/usr/src/fio-static/fio
00:01:26.123 + FIO_BIN=/usr/src/fio-static/fio
00:01:26.123 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:26.123 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:26.123 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:26.123 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.123 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.123 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:26.123 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.123 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.123 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.123 Test configuration:
00:01:26.123 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.123 SPDK_RUN_ASAN=1
00:01:26.123 SPDK_RUN_UBSAN=1
00:01:26.123 SPDK_TEST_RAID=1
00:01:26.123 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.384 RUN_NIGHTLY=0
17:42:44 -- common/autotest_common.sh@1688 -- $ [[ n == y ]]
00:01:26.384 17:42:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:26.384 17:42:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:26.384 17:42:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:26.384 17:42:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:26.384 17:42:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:26.384 17:42:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.384 17:42:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.384 17:42:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.384 17:42:44 -- paths/export.sh@5 -- $ export PATH
00:01:26.384 17:42:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.384 17:42:44 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:26.384 17:42:44 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:26.384 17:42:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729878164.XXXXXX
00:01:26.384 17:42:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729878164.PScZWw
00:01:26.384 17:42:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:26.384 17:42:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:26.384 17:42:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:26.384 17:42:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:26.384 17:42:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:26.384 17:42:44 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:26.384 17:42:44 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:26.384 17:42:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.384 17:42:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:26.384 17:42:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:26.384 17:42:44 -- pm/common@17 -- $ local monitor
00:01:26.384 17:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:26.384 17:42:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:26.384 17:42:44 -- pm/common@25 -- $ sleep 1
00:01:26.384 17:42:44 -- pm/common@21 -- $ date +%s
00:01:26.384 17:42:44 -- pm/common@21 -- $ date +%s
00:01:26.384 17:42:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729878164
00:01:26.384 17:42:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729878164
00:01:26.384 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729878164_collect-vmstat.pm.log
00:01:26.384 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729878164_collect-cpu-load.pm.log
00:01:27.325 17:42:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:27.325 17:42:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:27.325 17:42:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:27.325 17:42:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:27.325 17:42:45 -- spdk/autobuild.sh@16 -- $ date -u
00:01:27.325 Fri Oct 25 05:42:45 PM UTC 2024
00:01:27.325 17:42:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:27.325 v25.01-pre-118-ge83d2213a
00:01:27.325 17:42:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:27.325 17:42:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:27.325 17:42:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:27.325 17:42:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:27.325 17:42:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.325 ************************************
00:01:27.325 START TEST asan
00:01:27.325 ************************************
00:01:27.325 using asan
00:01:27.326 17:42:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:27.326
00:01:27.326 real	0m0.000s
00:01:27.326 user	0m0.000s
00:01:27.326 sys	0m0.000s
00:01:27.326 17:42:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:27.326 17:42:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.326 ************************************
00:01:27.326 END TEST asan
00:01:27.326 ************************************
00:01:27.586 17:42:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:27.586 17:42:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:27.586 17:42:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:27.586 17:42:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:27.586 17:42:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.586 ************************************
00:01:27.586 START TEST ubsan
00:01:27.586 ************************************
00:01:27.586 using ubsan
00:01:27.586 17:42:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:27.586
00:01:27.586 real	0m0.000s
00:01:27.586 user	0m0.000s
00:01:27.586 sys	0m0.000s
00:01:27.586 17:42:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:27.586 ************************************
00:01:27.586 17:42:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.586 END TEST ubsan
00:01:27.586 ************************************
00:01:27.586 17:42:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:27.586 17:42:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:27.586 17:42:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:27.586 17:42:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:27.586 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:27.586 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:28.156 Using 'verbs' RDMA provider
00:01:47.212 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:02.127 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:02.127 Creating mk/config.mk...done.
00:02:02.127 Creating mk/cc.flags.mk...done.
00:02:02.127 Type 'make' to build.
00:02:02.127 17:43:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:02.127 17:43:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:02.127 17:43:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:02.127 17:43:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.127 ************************************ 00:02:02.127 START TEST make 00:02:02.127 ************************************ 00:02:02.127 17:43:19 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:02.127 make[1]: Nothing to be done for 'all'. 00:02:12.151 The Meson build system 00:02:12.151 Version: 1.5.0 00:02:12.151 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:12.151 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:12.151 Build type: native build 00:02:12.151 Program cat found: YES (/usr/bin/cat) 00:02:12.151 Project name: DPDK 00:02:12.151 Project version: 24.03.0 00:02:12.151 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:12.151 C linker for the host machine: cc ld.bfd 2.40-14 00:02:12.151 Host machine cpu family: x86_64 00:02:12.151 Host machine cpu: x86_64 00:02:12.151 Message: ## Building in Developer Mode ## 00:02:12.151 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:12.151 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:12.151 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:12.152 Program python3 found: YES (/usr/bin/python3) 00:02:12.152 Program cat found: YES (/usr/bin/cat) 00:02:12.152 Compiler for C supports arguments -march=native: YES 00:02:12.152 Checking for size of "void *" : 8 00:02:12.152 Checking for size of "void *" : 8 (cached) 00:02:12.152 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:12.152 Library m found: YES 00:02:12.152 Library numa found: YES 00:02:12.152 Has header "numaif.h" : YES 
00:02:12.152 Library fdt found: NO 00:02:12.152 Library execinfo found: NO 00:02:12.152 Has header "execinfo.h" : YES 00:02:12.152 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:12.152 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:12.152 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:12.152 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.152 Run-time dependency openssl found: YES 3.1.1 00:02:12.152 Run-time dependency libpcap found: YES 1.10.4 00:02:12.152 Has header "pcap.h" with dependency libpcap: YES 00:02:12.152 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.152 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.152 Compiler for C supports arguments -Wformat: YES 00:02:12.152 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.152 Compiler for C supports arguments -Wformat-security: NO 00:02:12.152 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.152 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.152 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.152 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.152 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.152 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.152 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.152 Compiler for C supports arguments -Wundef: YES 00:02:12.152 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.152 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:12.152 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:12.152 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.152 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.152 Program objdump found: YES (/usr/bin/objdump) 00:02:12.152 Compiler for C supports arguments -mavx512f: YES 00:02:12.152 Checking if "AVX512 
checking" compiles: YES 00:02:12.152 Fetching value of define "__SSE4_2__" : 1 00:02:12.152 Fetching value of define "__AES__" : 1 00:02:12.152 Fetching value of define "__AVX__" : 1 00:02:12.152 Fetching value of define "__AVX2__" : 1 00:02:12.152 Fetching value of define "__AVX512BW__" : 1 00:02:12.152 Fetching value of define "__AVX512CD__" : 1 00:02:12.152 Fetching value of define "__AVX512DQ__" : 1 00:02:12.152 Fetching value of define "__AVX512F__" : 1 00:02:12.152 Fetching value of define "__AVX512VL__" : 1 00:02:12.152 Fetching value of define "__PCLMUL__" : 1 00:02:12.152 Fetching value of define "__RDRND__" : 1 00:02:12.152 Fetching value of define "__RDSEED__" : 1 00:02:12.152 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.152 Fetching value of define "__znver1__" : (undefined) 00:02:12.152 Fetching value of define "__znver2__" : (undefined) 00:02:12.152 Fetching value of define "__znver3__" : (undefined) 00:02:12.152 Fetching value of define "__znver4__" : (undefined) 00:02:12.152 Library asan found: YES 00:02:12.152 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.152 Message: lib/log: Defining dependency "log" 00:02:12.152 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.152 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.152 Library rt found: YES 00:02:12.152 Checking for function "getentropy" : NO 00:02:12.152 Message: lib/eal: Defining dependency "eal" 00:02:12.152 Message: lib/ring: Defining dependency "ring" 00:02:12.152 Message: lib/rcu: Defining dependency "rcu" 00:02:12.152 Message: lib/mempool: Defining dependency "mempool" 00:02:12.152 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.152 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.152 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.152 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.152 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.152 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:12.152 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:12.152 Compiler for C supports arguments -mpclmul: YES 00:02:12.152 Compiler for C supports arguments -maes: YES 00:02:12.152 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.152 Compiler for C supports arguments -mavx512bw: YES 00:02:12.152 Compiler for C supports arguments -mavx512dq: YES 00:02:12.152 Compiler for C supports arguments -mavx512vl: YES 00:02:12.152 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.152 Compiler for C supports arguments -mavx2: YES 00:02:12.152 Compiler for C supports arguments -mavx: YES 00:02:12.152 Message: lib/net: Defining dependency "net" 00:02:12.152 Message: lib/meter: Defining dependency "meter" 00:02:12.152 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.152 Message: lib/pci: Defining dependency "pci" 00:02:12.152 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.152 Message: lib/hash: Defining dependency "hash" 00:02:12.152 Message: lib/timer: Defining dependency "timer" 00:02:12.152 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.152 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.152 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.152 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.152 Message: lib/power: Defining dependency "power" 00:02:12.152 Message: lib/reorder: Defining dependency "reorder" 00:02:12.152 Message: lib/security: Defining dependency "security" 00:02:12.152 Has header "linux/userfaultfd.h" : YES 00:02:12.152 Has header "linux/vduse.h" : YES 00:02:12.152 Message: lib/vhost: Defining dependency "vhost" 00:02:12.152 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.152 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.152 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.152 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:12.152 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.152 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.152 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.152 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.152 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.152 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.152 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.152 Configuring doxy-api-html.conf using configuration 00:02:12.152 Configuring doxy-api-man.conf using configuration 00:02:12.152 Program mandb found: YES (/usr/bin/mandb) 00:02:12.152 Program sphinx-build found: NO 00:02:12.152 Configuring rte_build_config.h using configuration 00:02:12.152 Message: 00:02:12.152 ================= 00:02:12.152 Applications Enabled 00:02:12.152 ================= 00:02:12.152 00:02:12.152 apps: 00:02:12.152 00:02:12.152 00:02:12.152 Message: 00:02:12.152 ================= 00:02:12.152 Libraries Enabled 00:02:12.152 ================= 00:02:12.152 00:02:12.152 libs: 00:02:12.152 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.152 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.152 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.152 00:02:12.152 Message: 00:02:12.152 =============== 00:02:12.152 Drivers Enabled 00:02:12.152 =============== 00:02:12.152 00:02:12.152 common: 00:02:12.152 00:02:12.152 bus: 00:02:12.152 pci, vdev, 00:02:12.152 mempool: 00:02:12.152 ring, 00:02:12.152 dma: 00:02:12.152 00:02:12.152 net: 00:02:12.152 00:02:12.152 crypto: 00:02:12.152 00:02:12.152 compress: 00:02:12.152 00:02:12.152 vdpa: 00:02:12.152 00:02:12.152 00:02:12.152 Message: 00:02:12.152 ================= 00:02:12.152 Content Skipped 00:02:12.152 ================= 00:02:12.152 00:02:12.152 apps: 
00:02:12.152 dumpcap: explicitly disabled via build config 00:02:12.152 graph: explicitly disabled via build config 00:02:12.152 pdump: explicitly disabled via build config 00:02:12.152 proc-info: explicitly disabled via build config 00:02:12.152 test-acl: explicitly disabled via build config 00:02:12.152 test-bbdev: explicitly disabled via build config 00:02:12.152 test-cmdline: explicitly disabled via build config 00:02:12.152 test-compress-perf: explicitly disabled via build config 00:02:12.152 test-crypto-perf: explicitly disabled via build config 00:02:12.152 test-dma-perf: explicitly disabled via build config 00:02:12.152 test-eventdev: explicitly disabled via build config 00:02:12.152 test-fib: explicitly disabled via build config 00:02:12.152 test-flow-perf: explicitly disabled via build config 00:02:12.152 test-gpudev: explicitly disabled via build config 00:02:12.152 test-mldev: explicitly disabled via build config 00:02:12.152 test-pipeline: explicitly disabled via build config 00:02:12.152 test-pmd: explicitly disabled via build config 00:02:12.152 test-regex: explicitly disabled via build config 00:02:12.152 test-sad: explicitly disabled via build config 00:02:12.152 test-security-perf: explicitly disabled via build config 00:02:12.152 00:02:12.152 libs: 00:02:12.152 argparse: explicitly disabled via build config 00:02:12.152 metrics: explicitly disabled via build config 00:02:12.152 acl: explicitly disabled via build config 00:02:12.152 bbdev: explicitly disabled via build config 00:02:12.152 bitratestats: explicitly disabled via build config 00:02:12.152 bpf: explicitly disabled via build config 00:02:12.152 cfgfile: explicitly disabled via build config 00:02:12.152 distributor: explicitly disabled via build config 00:02:12.152 efd: explicitly disabled via build config 00:02:12.152 eventdev: explicitly disabled via build config 00:02:12.152 dispatcher: explicitly disabled via build config 00:02:12.152 gpudev: explicitly disabled via build config 
00:02:12.152 gro: explicitly disabled via build config 00:02:12.152 gso: explicitly disabled via build config 00:02:12.152 ip_frag: explicitly disabled via build config 00:02:12.152 jobstats: explicitly disabled via build config 00:02:12.152 latencystats: explicitly disabled via build config 00:02:12.152 lpm: explicitly disabled via build config 00:02:12.152 member: explicitly disabled via build config 00:02:12.152 pcapng: explicitly disabled via build config 00:02:12.152 rawdev: explicitly disabled via build config 00:02:12.153 regexdev: explicitly disabled via build config 00:02:12.153 mldev: explicitly disabled via build config 00:02:12.153 rib: explicitly disabled via build config 00:02:12.153 sched: explicitly disabled via build config 00:02:12.153 stack: explicitly disabled via build config 00:02:12.153 ipsec: explicitly disabled via build config 00:02:12.153 pdcp: explicitly disabled via build config 00:02:12.153 fib: explicitly disabled via build config 00:02:12.153 port: explicitly disabled via build config 00:02:12.153 pdump: explicitly disabled via build config 00:02:12.153 table: explicitly disabled via build config 00:02:12.153 pipeline: explicitly disabled via build config 00:02:12.153 graph: explicitly disabled via build config 00:02:12.153 node: explicitly disabled via build config 00:02:12.153 00:02:12.153 drivers: 00:02:12.153 common/cpt: not in enabled drivers build config 00:02:12.153 common/dpaax: not in enabled drivers build config 00:02:12.153 common/iavf: not in enabled drivers build config 00:02:12.153 common/idpf: not in enabled drivers build config 00:02:12.153 common/ionic: not in enabled drivers build config 00:02:12.153 common/mvep: not in enabled drivers build config 00:02:12.153 common/octeontx: not in enabled drivers build config 00:02:12.153 bus/auxiliary: not in enabled drivers build config 00:02:12.153 bus/cdx: not in enabled drivers build config 00:02:12.153 bus/dpaa: not in enabled drivers build config 00:02:12.153 bus/fslmc: 
not in enabled drivers build config 00:02:12.153 bus/ifpga: not in enabled drivers build config 00:02:12.153 bus/platform: not in enabled drivers build config 00:02:12.153 bus/uacce: not in enabled drivers build config 00:02:12.153 bus/vmbus: not in enabled drivers build config 00:02:12.153 common/cnxk: not in enabled drivers build config 00:02:12.153 common/mlx5: not in enabled drivers build config 00:02:12.153 common/nfp: not in enabled drivers build config 00:02:12.153 common/nitrox: not in enabled drivers build config 00:02:12.153 common/qat: not in enabled drivers build config 00:02:12.153 common/sfc_efx: not in enabled drivers build config 00:02:12.153 mempool/bucket: not in enabled drivers build config 00:02:12.153 mempool/cnxk: not in enabled drivers build config 00:02:12.153 mempool/dpaa: not in enabled drivers build config 00:02:12.153 mempool/dpaa2: not in enabled drivers build config 00:02:12.153 mempool/octeontx: not in enabled drivers build config 00:02:12.153 mempool/stack: not in enabled drivers build config 00:02:12.153 dma/cnxk: not in enabled drivers build config 00:02:12.153 dma/dpaa: not in enabled drivers build config 00:02:12.153 dma/dpaa2: not in enabled drivers build config 00:02:12.153 dma/hisilicon: not in enabled drivers build config 00:02:12.153 dma/idxd: not in enabled drivers build config 00:02:12.153 dma/ioat: not in enabled drivers build config 00:02:12.153 dma/skeleton: not in enabled drivers build config 00:02:12.153 net/af_packet: not in enabled drivers build config 00:02:12.153 net/af_xdp: not in enabled drivers build config 00:02:12.153 net/ark: not in enabled drivers build config 00:02:12.153 net/atlantic: not in enabled drivers build config 00:02:12.153 net/avp: not in enabled drivers build config 00:02:12.153 net/axgbe: not in enabled drivers build config 00:02:12.153 net/bnx2x: not in enabled drivers build config 00:02:12.153 net/bnxt: not in enabled drivers build config 00:02:12.153 net/bonding: not in enabled drivers 
build config 00:02:12.153 net/cnxk: not in enabled drivers build config 00:02:12.153 net/cpfl: not in enabled drivers build config 00:02:12.153 net/cxgbe: not in enabled drivers build config 00:02:12.153 net/dpaa: not in enabled drivers build config 00:02:12.153 net/dpaa2: not in enabled drivers build config 00:02:12.153 net/e1000: not in enabled drivers build config 00:02:12.153 net/ena: not in enabled drivers build config 00:02:12.153 net/enetc: not in enabled drivers build config 00:02:12.153 net/enetfec: not in enabled drivers build config 00:02:12.153 net/enic: not in enabled drivers build config 00:02:12.153 net/failsafe: not in enabled drivers build config 00:02:12.153 net/fm10k: not in enabled drivers build config 00:02:12.153 net/gve: not in enabled drivers build config 00:02:12.153 net/hinic: not in enabled drivers build config 00:02:12.153 net/hns3: not in enabled drivers build config 00:02:12.153 net/i40e: not in enabled drivers build config 00:02:12.153 net/iavf: not in enabled drivers build config 00:02:12.153 net/ice: not in enabled drivers build config 00:02:12.153 net/idpf: not in enabled drivers build config 00:02:12.153 net/igc: not in enabled drivers build config 00:02:12.153 net/ionic: not in enabled drivers build config 00:02:12.153 net/ipn3ke: not in enabled drivers build config 00:02:12.153 net/ixgbe: not in enabled drivers build config 00:02:12.153 net/mana: not in enabled drivers build config 00:02:12.153 net/memif: not in enabled drivers build config 00:02:12.153 net/mlx4: not in enabled drivers build config 00:02:12.153 net/mlx5: not in enabled drivers build config 00:02:12.153 net/mvneta: not in enabled drivers build config 00:02:12.153 net/mvpp2: not in enabled drivers build config 00:02:12.153 net/netvsc: not in enabled drivers build config 00:02:12.153 net/nfb: not in enabled drivers build config 00:02:12.153 net/nfp: not in enabled drivers build config 00:02:12.153 net/ngbe: not in enabled drivers build config 00:02:12.153 net/null: 
not in enabled drivers build config 00:02:12.153 net/octeontx: not in enabled drivers build config 00:02:12.153 net/octeon_ep: not in enabled drivers build config 00:02:12.153 net/pcap: not in enabled drivers build config 00:02:12.153 net/pfe: not in enabled drivers build config 00:02:12.153 net/qede: not in enabled drivers build config 00:02:12.153 net/ring: not in enabled drivers build config 00:02:12.153 net/sfc: not in enabled drivers build config 00:02:12.153 net/softnic: not in enabled drivers build config 00:02:12.153 net/tap: not in enabled drivers build config 00:02:12.153 net/thunderx: not in enabled drivers build config 00:02:12.153 net/txgbe: not in enabled drivers build config 00:02:12.153 net/vdev_netvsc: not in enabled drivers build config 00:02:12.153 net/vhost: not in enabled drivers build config 00:02:12.153 net/virtio: not in enabled drivers build config 00:02:12.153 net/vmxnet3: not in enabled drivers build config 00:02:12.153 raw/*: missing internal dependency, "rawdev" 00:02:12.153 crypto/armv8: not in enabled drivers build config 00:02:12.153 crypto/bcmfs: not in enabled drivers build config 00:02:12.153 crypto/caam_jr: not in enabled drivers build config 00:02:12.153 crypto/ccp: not in enabled drivers build config 00:02:12.153 crypto/cnxk: not in enabled drivers build config 00:02:12.153 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.153 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.153 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.153 crypto/mlx5: not in enabled drivers build config 00:02:12.153 crypto/mvsam: not in enabled drivers build config 00:02:12.153 crypto/nitrox: not in enabled drivers build config 00:02:12.153 crypto/null: not in enabled drivers build config 00:02:12.153 crypto/octeontx: not in enabled drivers build config 00:02:12.153 crypto/openssl: not in enabled drivers build config 00:02:12.153 crypto/scheduler: not in enabled drivers build config 00:02:12.153 crypto/uadk: not 
in enabled drivers build config 00:02:12.153 crypto/virtio: not in enabled drivers build config 00:02:12.153 compress/isal: not in enabled drivers build config 00:02:12.153 compress/mlx5: not in enabled drivers build config 00:02:12.153 compress/nitrox: not in enabled drivers build config 00:02:12.153 compress/octeontx: not in enabled drivers build config 00:02:12.153 compress/zlib: not in enabled drivers build config 00:02:12.153 regex/*: missing internal dependency, "regexdev" 00:02:12.153 ml/*: missing internal dependency, "mldev" 00:02:12.153 vdpa/ifc: not in enabled drivers build config 00:02:12.153 vdpa/mlx5: not in enabled drivers build config 00:02:12.153 vdpa/nfp: not in enabled drivers build config 00:02:12.153 vdpa/sfc: not in enabled drivers build config 00:02:12.153 event/*: missing internal dependency, "eventdev" 00:02:12.153 baseband/*: missing internal dependency, "bbdev" 00:02:12.153 gpu/*: missing internal dependency, "gpudev" 00:02:12.153 00:02:12.153 00:02:12.153 Build targets in project: 85 00:02:12.153 00:02:12.153 DPDK 24.03.0 00:02:12.153 00:02:12.153 User defined options 00:02:12.153 buildtype : debug 00:02:12.153 default_library : shared 00:02:12.153 libdir : lib 00:02:12.153 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.153 b_sanitize : address 00:02:12.153 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.153 c_link_args : 00:02:12.153 cpu_instruction_set: native 00:02:12.153 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:12.153 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:12.153 enable_docs : false 00:02:12.153 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:12.153 enable_kmods : false 00:02:12.153 max_lcores : 128 00:02:12.153 tests : false 00:02:12.153 00:02:12.153 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.153 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:12.153 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.153 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.153 [3/268] Linking static target lib/librte_kvargs.a 00:02:12.153 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.153 [5/268] Linking static target lib/librte_log.a 00:02:12.153 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.414 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.414 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.414 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.414 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.414 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.414 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.414 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.414 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.674 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.674 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:02:12.674 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.674 [18/268] Linking static target lib/librte_telemetry.a 00:02:12.934 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.934 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.934 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.934 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.934 [23/268] Linking target lib/librte_log.so.24.1 00:02:12.934 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.934 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.195 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.195 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.195 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.195 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.195 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.195 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:13.455 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.455 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.455 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.455 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.455 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.455 [37/268] Linking target lib/librte_telemetry.so.24.1 00:02:13.715 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.715 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.715 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.715 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.715 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.715 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.715 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.715 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.976 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.976 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.976 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.237 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.237 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.237 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.237 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.237 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.497 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.497 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.497 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.497 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.497 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.758 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.758 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:02:14.758 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:14.758 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.758 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.019 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.019 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.019 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.019 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.019 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.279 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.279 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.540 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.540 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.540 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.540 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.540 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.540 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.540 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.540 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.801 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.801 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.801 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.062 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.062 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.062 
[84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.062 [85/268] Linking static target lib/librte_ring.a 00:02:16.062 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.062 [87/268] Linking static target lib/librte_eal.a 00:02:16.322 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.322 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.322 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.322 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.322 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.322 [93/268] Linking static target lib/librte_mempool.a 00:02:16.582 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.582 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.582 [96/268] Linking static target lib/librte_rcu.a 00:02:16.582 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.582 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.582 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.582 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.842 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.842 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.842 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.102 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.102 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.102 [106/268] Linking static target lib/librte_meter.a 00:02:17.102 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.102 [108/268] Compiling 
C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.102 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.102 [110/268] Linking static target lib/librte_mbuf.a 00:02:17.102 [111/268] Linking static target lib/librte_net.a 00:02:17.367 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.367 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.367 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.367 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.647 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.647 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.647 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.926 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.926 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.206 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.206 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.206 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.206 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.473 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.473 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.473 [127/268] Linking static target lib/librte_pci.a 00:02:18.473 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.473 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.733 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:02:18.733 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.733 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.733 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.733 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.733 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.733 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.733 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.733 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.733 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.733 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.993 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.993 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.993 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.993 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.993 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.993 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.993 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.993 [148/268] Linking static target lib/librte_cmdline.a 00:02:19.253 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.254 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.514 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.514 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 
00:02:19.514 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.514 [154/268] Linking static target lib/librte_timer.a 00:02:19.514 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.774 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.774 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.774 [158/268] Linking static target lib/librte_compressdev.a 00:02:20.034 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.035 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.035 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.295 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.295 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.295 [164/268] Linking static target lib/librte_dmadev.a 00:02:20.295 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.295 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.295 [167/268] Linking static target lib/librte_hash.a 00:02:20.555 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.555 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.555 [170/268] Linking static target lib/librte_ethdev.a 00:02:20.555 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.555 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.555 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.814 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.814 [175/268] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.814 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.074 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.074 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.074 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.074 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.074 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.335 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.335 [183/268] Linking static target lib/librte_power.a 00:02:21.335 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.335 [185/268] Linking static target lib/librte_cryptodev.a 00:02:21.596 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.596 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.596 [188/268] Linking static target lib/librte_reorder.a 00:02:21.596 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.596 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.856 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.856 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.856 [193/268] Linking static target lib/librte_security.a 00:02:22.116 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.376 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.376 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.636 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:22.636 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.636 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.636 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.896 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.896 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.158 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.158 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.158 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.158 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.419 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.419 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.419 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.419 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.679 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.679 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.679 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.679 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.679 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.679 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.679 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.679 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.940 [219/268] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.940 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:23.940 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.940 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.940 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.940 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.940 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.940 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.200 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.140 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.521 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.781 [230/268] Linking target lib/librte_eal.so.24.1 00:02:26.781 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:27.041 [232/268] Linking target lib/librte_timer.so.24.1 00:02:27.041 [233/268] Linking target lib/librte_meter.so.24.1 00:02:27.041 [234/268] Linking target lib/librte_pci.so.24.1 00:02:27.041 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:27.041 [236/268] Linking target lib/librte_ring.so.24.1 00:02:27.041 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:27.041 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:27.041 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:27.041 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:27.041 [241/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:02:27.041 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:27.041 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:27.301 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:27.301 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:27.301 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:27.301 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:27.301 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:27.301 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:27.561 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.561 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:27.561 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:27.561 [253/268] Linking target lib/librte_net.so.24.1 00:02:27.561 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:27.821 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.821 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.821 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:27.821 [258/268] Linking target lib/librte_security.so.24.1 00:02:27.821 [259/268] Linking target lib/librte_hash.so.24.1 00:02:27.821 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:29.202 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.202 [262/268] Linking static target lib/librte_vhost.a 00:02:29.202 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.462 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:29.462 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 
00:02:29.722 [266/268] Linking target lib/librte_power.so.24.1 00:02:32.282 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.282 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.282 INFO: autodetecting backend as ninja 00:02:32.282 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.410 CC lib/ut_mock/mock.o 00:02:50.410 CC lib/ut/ut.o 00:02:50.410 CC lib/log/log.o 00:02:50.410 CC lib/log/log_flags.o 00:02:50.410 CC lib/log/log_deprecated.o 00:02:50.410 LIB libspdk_log.a 00:02:50.410 LIB libspdk_ut_mock.a 00:02:50.410 LIB libspdk_ut.a 00:02:50.410 SO libspdk_ut_mock.so.6.0 00:02:50.410 SO libspdk_log.so.7.1 00:02:50.410 SO libspdk_ut.so.2.0 00:02:50.410 SYMLINK libspdk_ut_mock.so 00:02:50.410 SYMLINK libspdk_ut.so 00:02:50.410 SYMLINK libspdk_log.so 00:02:50.410 CC lib/dma/dma.o 00:02:50.410 CXX lib/trace_parser/trace.o 00:02:50.410 CC lib/ioat/ioat.o 00:02:50.410 CC lib/util/base64.o 00:02:50.410 CC lib/util/bit_array.o 00:02:50.410 CC lib/util/cpuset.o 00:02:50.410 CC lib/util/crc16.o 00:02:50.410 CC lib/util/crc32.o 00:02:50.410 CC lib/util/crc32c.o 00:02:50.410 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.410 CC lib/util/crc32_ieee.o 00:02:50.410 CC lib/vfio_user/host/vfio_user.o 00:02:50.410 LIB libspdk_dma.a 00:02:50.410 CC lib/util/crc64.o 00:02:50.410 SO libspdk_dma.so.5.0 00:02:50.410 CC lib/util/dif.o 00:02:50.410 CC lib/util/fd.o 00:02:50.410 CC lib/util/fd_group.o 00:02:50.410 SYMLINK libspdk_dma.so 00:02:50.410 CC lib/util/file.o 00:02:50.410 CC lib/util/hexlify.o 00:02:50.410 LIB libspdk_ioat.a 00:02:50.410 CC lib/util/iov.o 00:02:50.410 SO libspdk_ioat.so.7.0 00:02:50.410 CC lib/util/math.o 00:02:50.410 SYMLINK libspdk_ioat.so 00:02:50.410 CC lib/util/net.o 00:02:50.410 CC lib/util/pipe.o 00:02:50.410 LIB libspdk_vfio_user.a 00:02:50.410 CC lib/util/strerror_tls.o 00:02:50.410 CC lib/util/string.o 00:02:50.410 
SO libspdk_vfio_user.so.5.0 00:02:50.410 CC lib/util/uuid.o 00:02:50.410 SYMLINK libspdk_vfio_user.so 00:02:50.410 CC lib/util/xor.o 00:02:50.410 CC lib/util/zipf.o 00:02:50.410 CC lib/util/md5.o 00:02:50.410 LIB libspdk_util.a 00:02:50.410 SO libspdk_util.so.10.0 00:02:50.410 LIB libspdk_trace_parser.a 00:02:50.669 SO libspdk_trace_parser.so.6.0 00:02:50.669 SYMLINK libspdk_util.so 00:02:50.669 SYMLINK libspdk_trace_parser.so 00:02:50.669 CC lib/vmd/vmd.o 00:02:50.669 CC lib/vmd/led.o 00:02:50.669 CC lib/conf/conf.o 00:02:50.669 CC lib/json/json_parse.o 00:02:50.669 CC lib/rdma_provider/common.o 00:02:50.669 CC lib/json/json_util.o 00:02:50.670 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.670 CC lib/idxd/idxd.o 00:02:50.670 CC lib/rdma_utils/rdma_utils.o 00:02:50.670 CC lib/env_dpdk/env.o 00:02:50.929 CC lib/env_dpdk/memory.o 00:02:50.929 CC lib/env_dpdk/pci.o 00:02:50.929 LIB libspdk_rdma_provider.a 00:02:50.929 SO libspdk_rdma_provider.so.6.0 00:02:50.929 LIB libspdk_conf.a 00:02:50.929 CC lib/env_dpdk/init.o 00:02:50.929 CC lib/json/json_write.o 00:02:50.929 SO libspdk_conf.so.6.0 00:02:50.929 SYMLINK libspdk_rdma_provider.so 00:02:50.929 LIB libspdk_rdma_utils.a 00:02:50.929 CC lib/idxd/idxd_user.o 00:02:50.929 SYMLINK libspdk_conf.so 00:02:50.929 SO libspdk_rdma_utils.so.1.0 00:02:50.929 CC lib/idxd/idxd_kernel.o 00:02:51.189 SYMLINK libspdk_rdma_utils.so 00:02:51.189 CC lib/env_dpdk/threads.o 00:02:51.189 CC lib/env_dpdk/pci_ioat.o 00:02:51.189 CC lib/env_dpdk/pci_virtio.o 00:02:51.189 CC lib/env_dpdk/pci_vmd.o 00:02:51.189 LIB libspdk_json.a 00:02:51.189 CC lib/env_dpdk/pci_idxd.o 00:02:51.189 SO libspdk_json.so.6.0 00:02:51.189 CC lib/env_dpdk/pci_event.o 00:02:51.189 CC lib/env_dpdk/sigbus_handler.o 00:02:51.189 SYMLINK libspdk_json.so 00:02:51.189 CC lib/env_dpdk/pci_dpdk.o 00:02:51.450 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.450 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.450 LIB libspdk_idxd.a 00:02:51.450 SO libspdk_idxd.so.12.1 00:02:51.450 LIB 
libspdk_vmd.a 00:02:51.450 SO libspdk_vmd.so.6.0 00:02:51.450 SYMLINK libspdk_idxd.so 00:02:51.450 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.450 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.450 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.450 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.450 SYMLINK libspdk_vmd.so 00:02:51.709 LIB libspdk_jsonrpc.a 00:02:51.709 SO libspdk_jsonrpc.so.6.0 00:02:51.969 SYMLINK libspdk_jsonrpc.so 00:02:52.229 LIB libspdk_env_dpdk.a 00:02:52.229 SO libspdk_env_dpdk.so.15.1 00:02:52.229 CC lib/rpc/rpc.o 00:02:52.489 SYMLINK libspdk_env_dpdk.so 00:02:52.489 LIB libspdk_rpc.a 00:02:52.489 SO libspdk_rpc.so.6.0 00:02:52.489 SYMLINK libspdk_rpc.so 00:02:53.060 CC lib/keyring/keyring.o 00:02:53.060 CC lib/keyring/keyring_rpc.o 00:02:53.060 CC lib/notify/notify_rpc.o 00:02:53.060 CC lib/notify/notify.o 00:02:53.060 CC lib/trace/trace.o 00:02:53.060 CC lib/trace/trace_flags.o 00:02:53.060 CC lib/trace/trace_rpc.o 00:02:53.060 LIB libspdk_notify.a 00:02:53.060 SO libspdk_notify.so.6.0 00:02:53.320 LIB libspdk_keyring.a 00:02:53.320 SYMLINK libspdk_notify.so 00:02:53.320 SO libspdk_keyring.so.2.0 00:02:53.320 LIB libspdk_trace.a 00:02:53.320 SO libspdk_trace.so.11.0 00:02:53.320 SYMLINK libspdk_keyring.so 00:02:53.320 SYMLINK libspdk_trace.so 00:02:53.891 CC lib/thread/thread.o 00:02:53.891 CC lib/thread/iobuf.o 00:02:53.891 CC lib/sock/sock.o 00:02:53.891 CC lib/sock/sock_rpc.o 00:02:54.152 LIB libspdk_sock.a 00:02:54.152 SO libspdk_sock.so.10.0 00:02:54.152 SYMLINK libspdk_sock.so 00:02:54.722 CC lib/nvme/nvme_ctrlr.o 00:02:54.722 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.722 CC lib/nvme/nvme_fabric.o 00:02:54.722 CC lib/nvme/nvme_ns_cmd.o 00:02:54.722 CC lib/nvme/nvme_ns.o 00:02:54.722 CC lib/nvme/nvme_pcie_common.o 00:02:54.722 CC lib/nvme/nvme_pcie.o 00:02:54.722 CC lib/nvme/nvme.o 00:02:54.722 CC lib/nvme/nvme_qpair.o 00:02:55.291 LIB libspdk_thread.a 00:02:55.291 SO libspdk_thread.so.11.0 00:02:55.291 CC lib/nvme/nvme_quirks.o 00:02:55.291 CC 
lib/nvme/nvme_transport.o 00:02:55.291 SYMLINK libspdk_thread.so 00:02:55.291 CC lib/nvme/nvme_discovery.o 00:02:55.291 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.291 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.552 CC lib/nvme/nvme_tcp.o 00:02:55.552 CC lib/nvme/nvme_opal.o 00:02:55.552 CC lib/nvme/nvme_io_msg.o 00:02:55.552 CC lib/nvme/nvme_poll_group.o 00:02:55.812 CC lib/nvme/nvme_zns.o 00:02:55.812 CC lib/nvme/nvme_stubs.o 00:02:55.812 CC lib/nvme/nvme_auth.o 00:02:55.812 CC lib/nvme/nvme_cuse.o 00:02:55.812 CC lib/nvme/nvme_rdma.o 00:02:56.072 CC lib/accel/accel.o 00:02:56.072 CC lib/blob/blobstore.o 00:02:56.332 CC lib/blob/request.o 00:02:56.333 CC lib/init/json_config.o 00:02:56.333 CC lib/blob/zeroes.o 00:02:56.333 CC lib/blob/blob_bs_dev.o 00:02:56.593 CC lib/init/subsystem.o 00:02:56.593 CC lib/init/subsystem_rpc.o 00:02:56.593 CC lib/init/rpc.o 00:02:56.593 CC lib/accel/accel_rpc.o 00:02:56.593 CC lib/accel/accel_sw.o 00:02:56.853 CC lib/virtio/virtio.o 00:02:56.853 LIB libspdk_init.a 00:02:56.853 SO libspdk_init.so.6.0 00:02:56.853 CC lib/fsdev/fsdev.o 00:02:56.853 CC lib/fsdev/fsdev_io.o 00:02:56.853 SYMLINK libspdk_init.so 00:02:56.853 CC lib/fsdev/fsdev_rpc.o 00:02:56.853 CC lib/virtio/virtio_vhost_user.o 00:02:57.114 CC lib/virtio/virtio_vfio_user.o 00:02:57.114 CC lib/virtio/virtio_pci.o 00:02:57.114 CC lib/event/app.o 00:02:57.114 CC lib/event/reactor.o 00:02:57.374 CC lib/event/log_rpc.o 00:02:57.374 CC lib/event/app_rpc.o 00:02:57.374 LIB libspdk_accel.a 00:02:57.374 LIB libspdk_nvme.a 00:02:57.374 CC lib/event/scheduler_static.o 00:02:57.374 SO libspdk_accel.so.16.0 00:02:57.374 LIB libspdk_virtio.a 00:02:57.374 SYMLINK libspdk_accel.so 00:02:57.374 SO libspdk_virtio.so.7.0 00:02:57.374 SO libspdk_nvme.so.14.1 00:02:57.634 SYMLINK libspdk_virtio.so 00:02:57.634 LIB libspdk_fsdev.a 00:02:57.634 CC lib/bdev/bdev_rpc.o 00:02:57.634 SO libspdk_fsdev.so.2.0 00:02:57.634 CC lib/bdev/bdev_zone.o 00:02:57.634 CC lib/bdev/bdev.o 00:02:57.634 CC 
lib/bdev/part.o 00:02:57.634 LIB libspdk_event.a 00:02:57.634 CC lib/bdev/scsi_nvme.o 00:02:57.634 SYMLINK libspdk_fsdev.so 00:02:57.634 SO libspdk_event.so.14.0 00:02:57.634 SYMLINK libspdk_nvme.so 00:02:57.894 SYMLINK libspdk_event.so 00:02:57.894 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.838 LIB libspdk_fuse_dispatcher.a 00:02:58.838 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.838 SYMLINK libspdk_fuse_dispatcher.so 00:02:59.778 LIB libspdk_blob.a 00:02:59.778 SO libspdk_blob.so.11.0 00:03:00.038 SYMLINK libspdk_blob.so 00:03:00.302 LIB libspdk_bdev.a 00:03:00.302 CC lib/blobfs/blobfs.o 00:03:00.302 CC lib/blobfs/tree.o 00:03:00.302 CC lib/lvol/lvol.o 00:03:00.302 SO libspdk_bdev.so.17.0 00:03:00.576 SYMLINK libspdk_bdev.so 00:03:00.847 CC lib/nvmf/ctrlr.o 00:03:00.847 CC lib/nvmf/ctrlr_discovery.o 00:03:00.847 CC lib/nvmf/ctrlr_bdev.o 00:03:00.847 CC lib/nvmf/subsystem.o 00:03:00.847 CC lib/ublk/ublk.o 00:03:00.847 CC lib/nbd/nbd.o 00:03:00.847 CC lib/scsi/dev.o 00:03:00.847 CC lib/ftl/ftl_core.o 00:03:00.847 CC lib/scsi/lun.o 00:03:01.107 CC lib/ftl/ftl_init.o 00:03:01.107 CC lib/nbd/nbd_rpc.o 00:03:01.107 LIB libspdk_blobfs.a 00:03:01.107 SO libspdk_blobfs.so.10.0 00:03:01.107 CC lib/nvmf/nvmf.o 00:03:01.367 SYMLINK libspdk_blobfs.so 00:03:01.367 CC lib/nvmf/nvmf_rpc.o 00:03:01.367 CC lib/scsi/port.o 00:03:01.367 CC lib/ftl/ftl_layout.o 00:03:01.367 LIB libspdk_nbd.a 00:03:01.367 LIB libspdk_lvol.a 00:03:01.367 SO libspdk_nbd.so.7.0 00:03:01.367 CC lib/ublk/ublk_rpc.o 00:03:01.367 SO libspdk_lvol.so.10.0 00:03:01.367 SYMLINK libspdk_nbd.so 00:03:01.367 CC lib/scsi/scsi.o 00:03:01.367 CC lib/scsi/scsi_bdev.o 00:03:01.367 CC lib/scsi/scsi_pr.o 00:03:01.367 SYMLINK libspdk_lvol.so 00:03:01.367 CC lib/scsi/scsi_rpc.o 00:03:01.628 LIB libspdk_ublk.a 00:03:01.628 CC lib/ftl/ftl_debug.o 00:03:01.628 SO libspdk_ublk.so.3.0 00:03:01.628 CC lib/nvmf/transport.o 00:03:01.628 CC lib/ftl/ftl_io.o 00:03:01.628 SYMLINK libspdk_ublk.so 00:03:01.628 CC lib/scsi/task.o 
00:03:01.628 CC lib/ftl/ftl_sb.o 00:03:01.628 CC lib/ftl/ftl_l2p.o 00:03:01.888 CC lib/ftl/ftl_l2p_flat.o 00:03:01.888 CC lib/ftl/ftl_nv_cache.o 00:03:01.888 LIB libspdk_scsi.a 00:03:01.888 CC lib/ftl/ftl_band.o 00:03:01.888 CC lib/nvmf/tcp.o 00:03:01.888 SO libspdk_scsi.so.9.0 00:03:01.888 CC lib/ftl/ftl_band_ops.o 00:03:01.888 CC lib/ftl/ftl_writer.o 00:03:02.149 SYMLINK libspdk_scsi.so 00:03:02.149 CC lib/nvmf/stubs.o 00:03:02.149 CC lib/ftl/ftl_rq.o 00:03:02.149 CC lib/ftl/ftl_reloc.o 00:03:02.149 CC lib/nvmf/mdns_server.o 00:03:02.149 CC lib/nvmf/rdma.o 00:03:02.149 CC lib/nvmf/auth.o 00:03:02.410 CC lib/iscsi/conn.o 00:03:02.410 CC lib/vhost/vhost.o 00:03:02.410 CC lib/vhost/vhost_rpc.o 00:03:02.670 CC lib/ftl/ftl_l2p_cache.o 00:03:02.670 CC lib/iscsi/init_grp.o 00:03:02.670 CC lib/ftl/ftl_p2l.o 00:03:02.930 CC lib/iscsi/iscsi.o 00:03:02.930 CC lib/ftl/ftl_p2l_log.o 00:03:02.930 CC lib/vhost/vhost_scsi.o 00:03:03.191 CC lib/iscsi/param.o 00:03:03.191 CC lib/vhost/vhost_blk.o 00:03:03.191 CC lib/vhost/rte_vhost_user.o 00:03:03.191 CC lib/iscsi/portal_grp.o 00:03:03.191 CC lib/iscsi/tgt_node.o 00:03:03.191 CC lib/ftl/mngt/ftl_mngt.o 00:03:03.451 CC lib/iscsi/iscsi_subsystem.o 00:03:03.452 CC lib/iscsi/iscsi_rpc.o 00:03:03.712 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:03.712 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.712 CC lib/iscsi/task.o 00:03:03.712 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.712 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.712 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.973 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.233 CC lib/ftl/utils/ftl_conf.o 00:03:04.233 LIB libspdk_vhost.a 00:03:04.233 CC lib/ftl/utils/ftl_md.o 00:03:04.233 CC lib/ftl/utils/ftl_mempool.o 00:03:04.233 SO 
libspdk_vhost.so.8.0 00:03:04.233 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.233 CC lib/ftl/utils/ftl_property.o 00:03:04.233 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.233 SYMLINK libspdk_vhost.so 00:03:04.233 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.233 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.493 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.493 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.493 LIB libspdk_iscsi.a 00:03:04.493 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.493 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.493 SO libspdk_iscsi.so.8.0 00:03:04.493 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.493 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.493 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.493 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.493 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.493 LIB libspdk_nvmf.a 00:03:04.493 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.753 SYMLINK libspdk_iscsi.so 00:03:04.753 CC lib/ftl/base/ftl_base_dev.o 00:03:04.753 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.753 CC lib/ftl/ftl_trace.o 00:03:04.753 SO libspdk_nvmf.so.20.0 00:03:05.013 SYMLINK libspdk_nvmf.so 00:03:05.014 LIB libspdk_ftl.a 00:03:05.273 SO libspdk_ftl.so.9.0 00:03:05.533 SYMLINK libspdk_ftl.so 00:03:05.793 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.053 CC module/blob/bdev/blob_bdev.o 00:03:06.053 CC module/keyring/file/keyring.o 00:03:06.053 CC module/accel/dsa/accel_dsa.o 00:03:06.053 CC module/fsdev/aio/fsdev_aio.o 00:03:06.053 CC module/accel/error/accel_error.o 00:03:06.053 CC module/accel/ioat/accel_ioat.o 00:03:06.053 CC module/keyring/linux/keyring.o 00:03:06.053 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.053 CC module/sock/posix/posix.o 00:03:06.053 LIB libspdk_env_dpdk_rpc.a 00:03:06.053 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.053 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.053 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.053 CC module/keyring/file/keyring_rpc.o 00:03:06.053 CC module/keyring/linux/keyring_rpc.o 00:03:06.053 CC 
module/accel/error/accel_error_rpc.o 00:03:06.053 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.053 LIB libspdk_scheduler_dynamic.a 00:03:06.053 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.053 LIB libspdk_accel_ioat.a 00:03:06.053 LIB libspdk_keyring_file.a 00:03:06.313 LIB libspdk_blob_bdev.a 00:03:06.313 SO libspdk_accel_ioat.so.6.0 00:03:06.313 LIB libspdk_keyring_linux.a 00:03:06.313 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.313 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.313 SO libspdk_keyring_file.so.2.0 00:03:06.313 SO libspdk_blob_bdev.so.11.0 00:03:06.313 SO libspdk_keyring_linux.so.1.0 00:03:06.313 LIB libspdk_accel_error.a 00:03:06.313 SYMLINK libspdk_accel_ioat.so 00:03:06.313 SYMLINK libspdk_keyring_file.so 00:03:06.313 SO libspdk_accel_error.so.2.0 00:03:06.313 SYMLINK libspdk_keyring_linux.so 00:03:06.313 CC module/fsdev/aio/linux_aio_mgr.o 00:03:06.313 SYMLINK libspdk_blob_bdev.so 00:03:06.313 SYMLINK libspdk_accel_error.so 00:03:06.313 LIB libspdk_accel_dsa.a 00:03:06.313 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.313 SO libspdk_accel_dsa.so.5.0 00:03:06.313 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.313 SYMLINK libspdk_accel_dsa.so 00:03:06.573 CC module/accel/iaa/accel_iaa.o 00:03:06.573 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.573 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.573 CC module/bdev/delay/vbdev_delay.o 00:03:06.573 CC module/bdev/error/vbdev_error.o 00:03:06.573 LIB libspdk_scheduler_gscheduler.a 00:03:06.573 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.573 CC module/bdev/gpt/gpt.o 00:03:06.573 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.573 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.573 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.573 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.573 LIB libspdk_fsdev_aio.a 00:03:06.573 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.573 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.573 SO libspdk_fsdev_aio.so.1.0 00:03:06.573 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:06.573 LIB libspdk_sock_posix.a 00:03:06.833 SYMLINK libspdk_fsdev_aio.so 00:03:06.833 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.833 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.833 SO libspdk_sock_posix.so.6.0 00:03:06.833 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.833 LIB libspdk_accel_iaa.a 00:03:06.833 LIB libspdk_bdev_error.a 00:03:06.833 SYMLINK libspdk_sock_posix.so 00:03:06.833 SO libspdk_accel_iaa.so.3.0 00:03:06.833 SO libspdk_bdev_error.so.6.0 00:03:06.833 SYMLINK libspdk_accel_iaa.so 00:03:06.833 LIB libspdk_blobfs_bdev.a 00:03:06.833 SYMLINK libspdk_bdev_error.so 00:03:06.833 LIB libspdk_bdev_delay.a 00:03:06.833 SO libspdk_blobfs_bdev.so.6.0 00:03:06.833 CC module/bdev/malloc/bdev_malloc.o 00:03:06.833 SO libspdk_bdev_delay.so.6.0 00:03:06.833 CC module/bdev/null/bdev_null.o 00:03:07.094 SYMLINK libspdk_blobfs_bdev.so 00:03:07.094 SYMLINK libspdk_bdev_delay.so 00:03:07.094 LIB libspdk_bdev_gpt.a 00:03:07.094 CC module/bdev/null/bdev_null_rpc.o 00:03:07.094 SO libspdk_bdev_gpt.so.6.0 00:03:07.094 CC module/bdev/nvme/bdev_nvme.o 00:03:07.094 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.094 CC module/bdev/raid/bdev_raid.o 00:03:07.094 SYMLINK libspdk_bdev_gpt.so 00:03:07.094 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.094 CC module/bdev/split/vbdev_split.o 00:03:07.094 LIB libspdk_bdev_lvol.a 00:03:07.094 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.094 SO libspdk_bdev_lvol.so.6.0 00:03:07.094 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.094 LIB libspdk_bdev_null.a 00:03:07.354 SO libspdk_bdev_null.so.6.0 00:03:07.354 SYMLINK libspdk_bdev_lvol.so 00:03:07.354 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.354 SYMLINK libspdk_bdev_null.so 00:03:07.354 CC module/bdev/nvme/nvme_rpc.o 00:03:07.354 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.354 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.354 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.354 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:07.354 CC module/bdev/nvme/vbdev_opal.o 00:03:07.354 LIB libspdk_bdev_passthru.a 00:03:07.354 LIB libspdk_bdev_zone_block.a 00:03:07.354 LIB libspdk_bdev_malloc.a 00:03:07.354 SO libspdk_bdev_passthru.so.6.0 00:03:07.354 SO libspdk_bdev_zone_block.so.6.0 00:03:07.615 SO libspdk_bdev_malloc.so.6.0 00:03:07.615 LIB libspdk_bdev_split.a 00:03:07.615 SYMLINK libspdk_bdev_zone_block.so 00:03:07.615 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.615 SYMLINK libspdk_bdev_passthru.so 00:03:07.615 SO libspdk_bdev_split.so.6.0 00:03:07.615 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.615 SYMLINK libspdk_bdev_malloc.so 00:03:07.615 SYMLINK libspdk_bdev_split.so 00:03:07.615 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.615 CC module/bdev/aio/bdev_aio.o 00:03:07.615 CC module/bdev/raid/raid0.o 00:03:07.615 CC module/bdev/ftl/bdev_ftl.o 00:03:07.876 CC module/bdev/raid/raid1.o 00:03:07.876 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.876 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.876 CC module/bdev/raid/concat.o 00:03:07.876 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.876 CC module/bdev/raid/raid5f.o 00:03:08.136 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.136 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.136 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.136 LIB libspdk_bdev_aio.a 00:03:08.136 SO libspdk_bdev_aio.so.6.0 00:03:08.136 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.136 SYMLINK libspdk_bdev_aio.so 00:03:08.136 LIB libspdk_bdev_ftl.a 00:03:08.136 LIB libspdk_bdev_iscsi.a 00:03:08.395 SO libspdk_bdev_iscsi.so.6.0 00:03:08.396 SO libspdk_bdev_ftl.so.6.0 00:03:08.396 LIB libspdk_bdev_virtio.a 00:03:08.396 SYMLINK libspdk_bdev_iscsi.so 00:03:08.396 SYMLINK libspdk_bdev_ftl.so 00:03:08.396 SO libspdk_bdev_virtio.so.6.0 00:03:08.396 SYMLINK libspdk_bdev_virtio.so 00:03:08.396 LIB libspdk_bdev_raid.a 00:03:08.655 SO libspdk_bdev_raid.so.6.0 00:03:08.655 SYMLINK libspdk_bdev_raid.so 00:03:09.596 LIB libspdk_bdev_nvme.a 
00:03:09.856 SO libspdk_bdev_nvme.so.7.0 00:03:09.856 SYMLINK libspdk_bdev_nvme.so 00:03:10.426 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.426 CC module/event/subsystems/keyring/keyring.o 00:03:10.426 CC module/event/subsystems/sock/sock.o 00:03:10.426 CC module/event/subsystems/fsdev/fsdev.o 00:03:10.426 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.426 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.426 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.426 CC module/event/subsystems/vmd/vmd.o 00:03:10.426 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.686 LIB libspdk_event_sock.a 00:03:10.686 LIB libspdk_event_fsdev.a 00:03:10.686 LIB libspdk_event_scheduler.a 00:03:10.686 LIB libspdk_event_keyring.a 00:03:10.686 LIB libspdk_event_vhost_blk.a 00:03:10.686 LIB libspdk_event_vmd.a 00:03:10.686 LIB libspdk_event_iobuf.a 00:03:10.686 SO libspdk_event_sock.so.5.0 00:03:10.686 SO libspdk_event_fsdev.so.1.0 00:03:10.686 SO libspdk_event_scheduler.so.4.0 00:03:10.686 SO libspdk_event_keyring.so.1.0 00:03:10.686 SO libspdk_event_vhost_blk.so.3.0 00:03:10.686 SO libspdk_event_vmd.so.6.0 00:03:10.686 SO libspdk_event_iobuf.so.3.0 00:03:10.686 SYMLINK libspdk_event_sock.so 00:03:10.686 SYMLINK libspdk_event_fsdev.so 00:03:10.686 SYMLINK libspdk_event_scheduler.so 00:03:10.686 SYMLINK libspdk_event_vhost_blk.so 00:03:10.686 SYMLINK libspdk_event_keyring.so 00:03:10.686 SYMLINK libspdk_event_vmd.so 00:03:10.686 SYMLINK libspdk_event_iobuf.so 00:03:11.255 CC module/event/subsystems/accel/accel.o 00:03:11.255 LIB libspdk_event_accel.a 00:03:11.255 SO libspdk_event_accel.so.6.0 00:03:11.515 SYMLINK libspdk_event_accel.so 00:03:11.775 CC module/event/subsystems/bdev/bdev.o 00:03:12.036 LIB libspdk_event_bdev.a 00:03:12.036 SO libspdk_event_bdev.so.6.0 00:03:12.036 SYMLINK libspdk_event_bdev.so 00:03:12.608 CC module/event/subsystems/scsi/scsi.o 00:03:12.608 CC module/event/subsystems/ublk/ublk.o 00:03:12.608 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.608 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.608 CC module/event/subsystems/nbd/nbd.o 00:03:12.608 LIB libspdk_event_ublk.a 00:03:12.608 LIB libspdk_event_nbd.a 00:03:12.608 LIB libspdk_event_scsi.a 00:03:12.608 SO libspdk_event_ublk.so.3.0 00:03:12.608 SO libspdk_event_scsi.so.6.0 00:03:12.608 SO libspdk_event_nbd.so.6.0 00:03:12.608 LIB libspdk_event_nvmf.a 00:03:12.608 SYMLINK libspdk_event_scsi.so 00:03:12.868 SYMLINK libspdk_event_ublk.so 00:03:12.869 SYMLINK libspdk_event_nbd.so 00:03:12.869 SO libspdk_event_nvmf.so.6.0 00:03:12.869 SYMLINK libspdk_event_nvmf.so 00:03:13.129 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.401 LIB libspdk_event_vhost_scsi.a 00:03:13.401 LIB libspdk_event_iscsi.a 00:03:13.401 SO libspdk_event_iscsi.so.6.0 00:03:13.401 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.401 SYMLINK libspdk_event_iscsi.so 00:03:13.401 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.677 SO libspdk.so.6.0 00:03:13.677 SYMLINK libspdk.so 00:03:13.937 CXX app/trace/trace.o 00:03:13.937 CC app/spdk_lspci/spdk_lspci.o 00:03:13.937 CC app/trace_record/trace_record.o 00:03:13.937 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:13.937 CC app/nvmf_tgt/nvmf_main.o 00:03:13.937 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.937 CC app/spdk_tgt/spdk_tgt.o 00:03:13.937 CC examples/ioat/perf/perf.o 00:03:13.937 CC examples/util/zipf/zipf.o 00:03:13.937 CC test/thread/poller_perf/poller_perf.o 00:03:13.937 LINK spdk_lspci 00:03:13.937 LINK interrupt_tgt 00:03:13.937 LINK nvmf_tgt 00:03:14.197 LINK zipf 00:03:14.197 LINK poller_perf 00:03:14.197 LINK spdk_trace_record 00:03:14.197 LINK iscsi_tgt 00:03:14.197 LINK spdk_tgt 00:03:14.197 LINK ioat_perf 00:03:14.197 LINK spdk_trace 00:03:14.197 CC app/spdk_nvme_perf/perf.o 00:03:14.457 CC app/spdk_nvme_identify/identify.o 00:03:14.457 CC examples/ioat/verify/verify.o 00:03:14.457 CC 
app/spdk_nvme_discover/discovery_aer.o 00:03:14.457 CC app/spdk_top/spdk_top.o 00:03:14.457 CC test/dma/test_dma/test_dma.o 00:03:14.457 CC app/spdk_dd/spdk_dd.o 00:03:14.457 CC examples/thread/thread/thread_ex.o 00:03:14.457 CC test/app/bdev_svc/bdev_svc.o 00:03:14.457 CC examples/sock/hello_world/hello_sock.o 00:03:14.717 LINK spdk_nvme_discover 00:03:14.717 LINK verify 00:03:14.717 LINK bdev_svc 00:03:14.717 LINK thread 00:03:14.717 LINK hello_sock 00:03:14.717 LINK spdk_dd 00:03:14.977 CC examples/idxd/perf/perf.o 00:03:14.977 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.977 LINK test_dma 00:03:14.977 CC test/app/histogram_perf/histogram_perf.o 00:03:14.977 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.977 LINK lsvmd 00:03:14.977 TEST_HEADER include/spdk/accel.h 00:03:14.977 TEST_HEADER include/spdk/accel_module.h 00:03:14.977 TEST_HEADER include/spdk/assert.h 00:03:14.977 TEST_HEADER include/spdk/barrier.h 00:03:14.977 TEST_HEADER include/spdk/base64.h 00:03:14.977 TEST_HEADER include/spdk/bdev.h 00:03:14.977 TEST_HEADER include/spdk/bdev_module.h 00:03:14.977 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.977 TEST_HEADER include/spdk/bit_array.h 00:03:14.977 TEST_HEADER include/spdk/bit_pool.h 00:03:14.977 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.977 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.977 TEST_HEADER include/spdk/blobfs.h 00:03:14.977 LINK histogram_perf 00:03:14.977 TEST_HEADER include/spdk/blob.h 00:03:14.977 TEST_HEADER include/spdk/conf.h 00:03:14.977 TEST_HEADER include/spdk/config.h 00:03:14.977 TEST_HEADER include/spdk/cpuset.h 00:03:14.977 TEST_HEADER include/spdk/crc16.h 00:03:14.977 TEST_HEADER include/spdk/crc32.h 00:03:14.977 TEST_HEADER include/spdk/crc64.h 00:03:14.977 TEST_HEADER include/spdk/dif.h 00:03:14.977 TEST_HEADER include/spdk/dma.h 00:03:15.237 TEST_HEADER include/spdk/endian.h 00:03:15.237 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.237 TEST_HEADER include/spdk/env.h 00:03:15.237 TEST_HEADER include/spdk/event.h 
00:03:15.237 TEST_HEADER include/spdk/fd_group.h 00:03:15.237 TEST_HEADER include/spdk/fd.h 00:03:15.237 TEST_HEADER include/spdk/file.h 00:03:15.237 TEST_HEADER include/spdk/fsdev.h 00:03:15.237 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.237 TEST_HEADER include/spdk/ftl.h 00:03:15.237 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.237 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.237 TEST_HEADER include/spdk/hexlify.h 00:03:15.237 TEST_HEADER include/spdk/histogram_data.h 00:03:15.237 TEST_HEADER include/spdk/idxd.h 00:03:15.237 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.237 TEST_HEADER include/spdk/init.h 00:03:15.237 TEST_HEADER include/spdk/ioat.h 00:03:15.237 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.237 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.237 TEST_HEADER include/spdk/json.h 00:03:15.237 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.237 TEST_HEADER include/spdk/keyring.h 00:03:15.237 TEST_HEADER include/spdk/keyring_module.h 00:03:15.237 TEST_HEADER include/spdk/likely.h 00:03:15.237 TEST_HEADER include/spdk/log.h 00:03:15.237 TEST_HEADER include/spdk/lvol.h 00:03:15.237 TEST_HEADER include/spdk/md5.h 00:03:15.237 TEST_HEADER include/spdk/memory.h 00:03:15.237 TEST_HEADER include/spdk/mmio.h 00:03:15.237 TEST_HEADER include/spdk/nbd.h 00:03:15.237 TEST_HEADER include/spdk/net.h 00:03:15.237 TEST_HEADER include/spdk/notify.h 00:03:15.237 TEST_HEADER include/spdk/nvme.h 00:03:15.237 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.237 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.237 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.237 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.237 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.237 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.237 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.237 TEST_HEADER include/spdk/nvmf.h 00:03:15.237 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.237 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.238 TEST_HEADER include/spdk/opal.h 00:03:15.238 
TEST_HEADER include/spdk/opal_spec.h 00:03:15.238 TEST_HEADER include/spdk/pci_ids.h 00:03:15.238 TEST_HEADER include/spdk/pipe.h 00:03:15.238 LINK spdk_nvme_perf 00:03:15.238 TEST_HEADER include/spdk/queue.h 00:03:15.238 TEST_HEADER include/spdk/reduce.h 00:03:15.238 TEST_HEADER include/spdk/rpc.h 00:03:15.238 TEST_HEADER include/spdk/scheduler.h 00:03:15.238 TEST_HEADER include/spdk/scsi.h 00:03:15.238 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.238 TEST_HEADER include/spdk/sock.h 00:03:15.238 TEST_HEADER include/spdk/stdinc.h 00:03:15.238 TEST_HEADER include/spdk/string.h 00:03:15.238 TEST_HEADER include/spdk/thread.h 00:03:15.238 TEST_HEADER include/spdk/trace.h 00:03:15.238 TEST_HEADER include/spdk/trace_parser.h 00:03:15.238 TEST_HEADER include/spdk/tree.h 00:03:15.238 TEST_HEADER include/spdk/ublk.h 00:03:15.238 TEST_HEADER include/spdk/util.h 00:03:15.238 TEST_HEADER include/spdk/uuid.h 00:03:15.238 TEST_HEADER include/spdk/version.h 00:03:15.238 CC test/env/vtophys/vtophys.o 00:03:15.238 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.238 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.238 TEST_HEADER include/spdk/vhost.h 00:03:15.238 TEST_HEADER include/spdk/vmd.h 00:03:15.238 TEST_HEADER include/spdk/xor.h 00:03:15.238 TEST_HEADER include/spdk/zipf.h 00:03:15.238 CXX test/cpp_headers/accel.o 00:03:15.238 LINK idxd_perf 00:03:15.238 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.238 CC examples/vmd/led/led.o 00:03:15.238 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.238 LINK spdk_nvme_identify 00:03:15.238 CXX test/cpp_headers/accel_module.o 00:03:15.238 LINK spdk_top 00:03:15.498 LINK vtophys 00:03:15.498 CXX test/cpp_headers/assert.o 00:03:15.498 LINK led 00:03:15.498 LINK nvme_fuzz 00:03:15.498 LINK env_dpdk_post_init 00:03:15.498 CXX test/cpp_headers/barrier.o 00:03:15.498 CXX test/cpp_headers/base64.o 00:03:15.498 CC app/fio/nvme/fio_plugin.o 00:03:15.498 CC test/env/memory/memory_ut.o 00:03:15.498 CC 
app/fio/bdev/fio_plugin.o 00:03:15.758 CXX test/cpp_headers/bdev.o 00:03:15.758 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.758 CC app/vhost/vhost.o 00:03:15.758 CC examples/accel/perf/accel_perf.o 00:03:15.758 CC examples/blob/hello_world/hello_blob.o 00:03:15.758 CC test/env/pci/pci_ut.o 00:03:15.758 LINK mem_callbacks 00:03:15.758 CXX test/cpp_headers/bdev_module.o 00:03:16.018 LINK vhost 00:03:16.018 LINK hello_blob 00:03:16.018 CXX test/cpp_headers/bdev_zone.o 00:03:16.018 CC examples/blob/cli/blobcli.o 00:03:16.018 LINK spdk_bdev 00:03:16.018 LINK spdk_nvme 00:03:16.018 CXX test/cpp_headers/bit_array.o 00:03:16.278 CXX test/cpp_headers/bit_pool.o 00:03:16.279 CXX test/cpp_headers/blob_bdev.o 00:03:16.279 LINK pci_ut 00:03:16.279 LINK accel_perf 00:03:16.279 CC test/event/event_perf/event_perf.o 00:03:16.279 CC test/nvme/aer/aer.o 00:03:16.279 CC test/nvme/reset/reset.o 00:03:16.279 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.539 CC test/event/reactor/reactor.o 00:03:16.539 LINK event_perf 00:03:16.539 LINK blobcli 00:03:16.539 CC test/nvme/sgl/sgl.o 00:03:16.539 CXX test/cpp_headers/blobfs.o 00:03:16.539 LINK reactor 00:03:16.539 CC test/event/reactor_perf/reactor_perf.o 00:03:16.539 LINK reset 00:03:16.539 LINK aer 00:03:16.539 LINK memory_ut 00:03:16.539 CXX test/cpp_headers/blob.o 00:03:16.799 CC test/event/app_repeat/app_repeat.o 00:03:16.799 LINK reactor_perf 00:03:16.799 CC examples/nvme/hello_world/hello_world.o 00:03:16.799 CC test/event/scheduler/scheduler.o 00:03:16.799 LINK sgl 00:03:16.799 CXX test/cpp_headers/conf.o 00:03:16.799 CXX test/cpp_headers/config.o 00:03:16.799 LINK app_repeat 00:03:16.799 CC test/rpc_client/rpc_client_test.o 00:03:16.799 CXX test/cpp_headers/cpuset.o 00:03:16.799 CC examples/nvme/reconnect/reconnect.o 00:03:17.059 LINK hello_world 00:03:17.059 LINK scheduler 00:03:17.059 CXX test/cpp_headers/crc16.o 00:03:17.059 CC test/accel/dif/dif.o 00:03:17.059 CC test/nvme/e2edp/nvme_dp.o 00:03:17.059 CC 
test/blobfs/mkfs/mkfs.o 00:03:17.059 LINK rpc_client_test 00:03:17.059 CXX test/cpp_headers/crc32.o 00:03:17.059 CXX test/cpp_headers/crc64.o 00:03:17.059 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.059 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.059 LINK mkfs 00:03:17.319 LINK nvme_dp 00:03:17.319 CC test/lvol/esnap/esnap.o 00:03:17.319 CXX test/cpp_headers/dif.o 00:03:17.319 LINK reconnect 00:03:17.319 CXX test/cpp_headers/dma.o 00:03:17.319 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.320 LINK iscsi_fuzz 00:03:17.320 CC test/nvme/err_injection/err_injection.o 00:03:17.320 CXX test/cpp_headers/endian.o 00:03:17.320 CC test/nvme/overhead/overhead.o 00:03:17.579 CC examples/nvme/arbitration/arbitration.o 00:03:17.579 CC examples/nvme/hotplug/hotplug.o 00:03:17.580 CXX test/cpp_headers/env_dpdk.o 00:03:17.580 LINK vhost_fuzz 00:03:17.580 LINK err_injection 00:03:17.580 CC test/nvme/startup/startup.o 00:03:17.580 CXX test/cpp_headers/env.o 00:03:17.580 LINK dif 00:03:17.840 LINK overhead 00:03:17.840 LINK hotplug 00:03:17.840 CC test/app/jsoncat/jsoncat.o 00:03:17.840 LINK arbitration 00:03:17.840 LINK nvme_manage 00:03:17.840 CXX test/cpp_headers/event.o 00:03:17.840 LINK startup 00:03:17.840 CXX test/cpp_headers/fd_group.o 00:03:17.840 CC test/nvme/reserve/reserve.o 00:03:17.840 CXX test/cpp_headers/fd.o 00:03:17.840 CXX test/cpp_headers/file.o 00:03:17.840 LINK jsoncat 00:03:17.840 CXX test/cpp_headers/fsdev.o 00:03:18.100 CXX test/cpp_headers/fsdev_module.o 00:03:18.100 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.100 CC examples/nvme/abort/abort.o 00:03:18.100 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.100 LINK reserve 00:03:18.100 CC test/app/stub/stub.o 00:03:18.100 CXX test/cpp_headers/ftl.o 00:03:18.100 CC test/nvme/simple_copy/simple_copy.o 00:03:18.100 CC test/bdev/bdevio/bdevio.o 00:03:18.100 LINK cmb_copy 00:03:18.360 LINK pmr_persistence 00:03:18.360 CXX test/cpp_headers/fuse_dispatcher.o 00:03:18.360 LINK stub 
00:03:18.360 LINK simple_copy 00:03:18.360 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:18.360 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.360 CXX test/cpp_headers/gpt_spec.o 00:03:18.360 CXX test/cpp_headers/hexlify.o 00:03:18.360 LINK abort 00:03:18.360 CXX test/cpp_headers/histogram_data.o 00:03:18.621 CXX test/cpp_headers/idxd.o 00:03:18.621 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.621 LINK bdevio 00:03:18.621 CXX test/cpp_headers/idxd_spec.o 00:03:18.621 LINK hello_bdev 00:03:18.621 CC test/nvme/connect_stress/connect_stress.o 00:03:18.621 CC test/nvme/boot_partition/boot_partition.o 00:03:18.621 CC test/nvme/compliance/nvme_compliance.o 00:03:18.621 LINK hello_fsdev 00:03:18.621 CXX test/cpp_headers/init.o 00:03:18.621 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.621 CXX test/cpp_headers/ioat.o 00:03:18.621 CXX test/cpp_headers/ioat_spec.o 00:03:18.881 LINK boot_partition 00:03:18.881 LINK connect_stress 00:03:18.881 CXX test/cpp_headers/iscsi_spec.o 00:03:18.881 CXX test/cpp_headers/json.o 00:03:18.881 LINK fused_ordering 00:03:18.881 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.881 CXX test/cpp_headers/jsonrpc.o 00:03:18.881 CXX test/cpp_headers/keyring.o 00:03:18.881 CC test/nvme/fdp/fdp.o 00:03:19.141 CC test/nvme/cuse/cuse.o 00:03:19.141 CXX test/cpp_headers/keyring_module.o 00:03:19.141 LINK nvme_compliance 00:03:19.141 CXX test/cpp_headers/likely.o 00:03:19.141 CXX test/cpp_headers/log.o 00:03:19.141 CXX test/cpp_headers/lvol.o 00:03:19.141 LINK doorbell_aers 00:03:19.141 CXX test/cpp_headers/md5.o 00:03:19.141 CXX test/cpp_headers/memory.o 00:03:19.141 CXX test/cpp_headers/mmio.o 00:03:19.141 CXX test/cpp_headers/nbd.o 00:03:19.141 CXX test/cpp_headers/net.o 00:03:19.401 CXX test/cpp_headers/notify.o 00:03:19.401 CXX test/cpp_headers/nvme.o 00:03:19.401 LINK bdevperf 00:03:19.401 CXX test/cpp_headers/nvme_intel.o 00:03:19.401 LINK fdp 00:03:19.401 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.401 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:03:19.401 CXX test/cpp_headers/nvme_spec.o 00:03:19.401 CXX test/cpp_headers/nvme_zns.o 00:03:19.401 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.401 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.401 CXX test/cpp_headers/nvmf.o 00:03:19.401 CXX test/cpp_headers/nvmf_spec.o 00:03:19.661 CXX test/cpp_headers/nvmf_transport.o 00:03:19.661 CXX test/cpp_headers/opal.o 00:03:19.661 CXX test/cpp_headers/opal_spec.o 00:03:19.661 CXX test/cpp_headers/pci_ids.o 00:03:19.661 CXX test/cpp_headers/pipe.o 00:03:19.661 CXX test/cpp_headers/queue.o 00:03:19.661 CC examples/nvmf/nvmf/nvmf.o 00:03:19.661 CXX test/cpp_headers/reduce.o 00:03:19.661 CXX test/cpp_headers/rpc.o 00:03:19.661 CXX test/cpp_headers/scheduler.o 00:03:19.661 CXX test/cpp_headers/scsi.o 00:03:19.661 CXX test/cpp_headers/scsi_spec.o 00:03:19.661 CXX test/cpp_headers/sock.o 00:03:19.661 CXX test/cpp_headers/stdinc.o 00:03:19.922 CXX test/cpp_headers/string.o 00:03:19.922 CXX test/cpp_headers/thread.o 00:03:19.922 CXX test/cpp_headers/trace.o 00:03:19.922 CXX test/cpp_headers/trace_parser.o 00:03:19.922 CXX test/cpp_headers/tree.o 00:03:19.922 CXX test/cpp_headers/ublk.o 00:03:19.922 CXX test/cpp_headers/util.o 00:03:19.922 CXX test/cpp_headers/uuid.o 00:03:19.922 LINK nvmf 00:03:19.922 CXX test/cpp_headers/version.o 00:03:19.922 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.922 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.922 CXX test/cpp_headers/vhost.o 00:03:19.922 CXX test/cpp_headers/vmd.o 00:03:20.182 CXX test/cpp_headers/xor.o 00:03:20.182 CXX test/cpp_headers/zipf.o 00:03:20.182 LINK cuse 00:03:23.480 LINK esnap 00:03:23.480 00:03:23.480 real 1m22.040s 00:03:23.480 user 7m3.670s 00:03:23.480 sys 1m38.424s 00:03:23.480 17:44:41 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:23.480 17:44:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.480 ************************************ 00:03:23.480 END TEST make 00:03:23.480 
************************************ 00:03:23.480 17:44:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.480 17:44:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:23.480 17:44:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.480 17:44:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.480 17:44:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.480 17:44:41 -- pm/common@44 -- $ pid=5450 00:03:23.480 17:44:41 -- pm/common@50 -- $ kill -TERM 5450 00:03:23.480 17:44:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.480 17:44:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.480 17:44:41 -- pm/common@44 -- $ pid=5452 00:03:23.480 17:44:41 -- pm/common@50 -- $ kill -TERM 5452 00:03:23.740 17:44:41 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:23.740 17:44:41 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:23.740 17:44:41 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:23.740 17:44:42 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:23.740 17:44:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.741 17:44:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.741 17:44:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.741 17:44:42 -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.741 17:44:42 -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.741 17:44:42 -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.741 17:44:42 -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.741 17:44:42 -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.741 17:44:42 -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.741 17:44:42 -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.741 17:44:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.741 17:44:42 -- scripts/common.sh@344 -- # case "$op" in 00:03:23.741 
17:44:42 -- scripts/common.sh@345 -- # : 1 00:03:23.741 17:44:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.741 17:44:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:23.741 17:44:42 -- scripts/common.sh@365 -- # decimal 1 00:03:23.741 17:44:42 -- scripts/common.sh@353 -- # local d=1 00:03:23.741 17:44:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.741 17:44:42 -- scripts/common.sh@355 -- # echo 1 00:03:23.741 17:44:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.741 17:44:42 -- scripts/common.sh@366 -- # decimal 2 00:03:23.741 17:44:42 -- scripts/common.sh@353 -- # local d=2 00:03:23.741 17:44:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.741 17:44:42 -- scripts/common.sh@355 -- # echo 2 00:03:23.741 17:44:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.741 17:44:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.741 17:44:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.741 17:44:42 -- scripts/common.sh@368 -- # return 0 00:03:23.741 17:44:42 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.741 17:44:42 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.741 --rc genhtml_branch_coverage=1 00:03:23.741 --rc genhtml_function_coverage=1 00:03:23.741 --rc genhtml_legend=1 00:03:23.741 --rc geninfo_all_blocks=1 00:03:23.741 --rc geninfo_unexecuted_blocks=1 00:03:23.741 00:03:23.741 ' 00:03:23.741 17:44:42 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.741 --rc genhtml_branch_coverage=1 00:03:23.741 --rc genhtml_function_coverage=1 00:03:23.741 --rc genhtml_legend=1 00:03:23.741 --rc geninfo_all_blocks=1 00:03:23.741 --rc geninfo_unexecuted_blocks=1 00:03:23.741 00:03:23.741 ' 00:03:23.741 17:44:42 -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.741 --rc genhtml_branch_coverage=1 00:03:23.741 --rc genhtml_function_coverage=1 00:03:23.741 --rc genhtml_legend=1 00:03:23.741 --rc geninfo_all_blocks=1 00:03:23.741 --rc geninfo_unexecuted_blocks=1 00:03:23.741 00:03:23.741 ' 00:03:23.741 17:44:42 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.741 --rc genhtml_branch_coverage=1 00:03:23.741 --rc genhtml_function_coverage=1 00:03:23.741 --rc genhtml_legend=1 00:03:23.741 --rc geninfo_all_blocks=1 00:03:23.741 --rc geninfo_unexecuted_blocks=1 00:03:23.741 00:03:23.741 ' 00:03:23.741 17:44:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:23.741 17:44:42 -- nvmf/common.sh@7 -- # uname -s 00:03:23.741 17:44:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.741 17:44:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.741 17:44:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.741 17:44:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.741 17:44:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.741 17:44:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.741 17:44:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.741 17:44:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.741 17:44:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.741 17:44:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.741 17:44:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:991536d8-8d7e-47ec-ad25-340c17aae998 00:03:23.741 17:44:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=991536d8-8d7e-47ec-ad25-340c17aae998 00:03:23.741 17:44:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.741 17:44:42 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:03:23.741 17:44:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:23.741 17:44:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:23.741 17:44:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.741 17:44:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:23.741 17:44:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.741 17:44:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.741 17:44:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.741 17:44:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.741 17:44:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.741 17:44:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.741 17:44:42 -- paths/export.sh@5 -- # export PATH 00:03:23.741 17:44:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.741 17:44:42 -- nvmf/common.sh@51 -- # : 
0 00:03:23.741 17:44:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:23.741 17:44:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:23.741 17:44:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:23.741 17:44:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.741 17:44:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.741 17:44:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:23.741 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:23.741 17:44:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:23.741 17:44:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:23.741 17:44:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:23.741 17:44:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.741 17:44:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:23.741 17:44:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.741 17:44:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.741 17:44:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.741 17:44:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.741 17:44:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.741 17:44:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.741 17:44:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.741 17:44:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.741 17:44:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54408 00:03:23.741 17:44:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.741 17:44:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.741 17:44:42 -- pm/common@17 -- # local monitor 00:03:23.741 17:44:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.741 17:44:42 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.741 17:44:42 -- pm/common@25 -- # sleep 1 00:03:23.741 17:44:42 -- pm/common@21 -- # date +%s 00:03:23.741 17:44:42 -- pm/common@21 -- # date +%s 00:03:23.741 17:44:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729878282 00:03:23.741 17:44:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729878282 00:03:24.001 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729878282_collect-cpu-load.pm.log 00:03:24.001 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729878282_collect-vmstat.pm.log 00:03:24.942 17:44:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.942 17:44:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:24.942 17:44:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.942 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:03:24.942 17:44:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:24.942 17:44:43 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:24.942 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:03:24.942 17:44:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.942 17:44:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:24.942 17:44:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:24.942 17:44:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:24.942 17:44:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:24.942 17:44:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.942 17:44:43 -- common/autotest_common.sh@1453 -- # uname 00:03:24.942 17:44:43 -- 
common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:24.942 17:44:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.942 17:44:43 -- common/autotest_common.sh@1473 -- # uname 00:03:24.942 17:44:43 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:24.942 17:44:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:24.942 17:44:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:24.942 lcov: LCOV version 1.15 00:03:24.942 17:44:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:39.851 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.959 17:45:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:57.959 17:45:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:57.959 17:45:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.959 17:45:13 -- spdk/autotest.sh@78 -- # rm -f 00:03:57.959 17:45:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.959 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:57.959 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:57.959 17:45:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:57.959 17:45:14 -- common/autotest_common.sh@1653 -- # 
zoned_devs=() 00:03:57.959 17:45:14 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:03:57.959 17:45:14 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:03:57.959 17:45:14 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:03:57.959 17:45:14 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:03:57.959 17:45:14 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:03:57.959 17:45:14 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.959 17:45:14 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:03:57.959 17:45:14 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:03:57.960 17:45:14 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:03:57.960 17:45:14 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:03:57.960 17:45:14 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:57.960 17:45:14 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:03:57.960 17:45:14 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:03:57.960 17:45:14 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n2 00:03:57.960 17:45:14 -- common/autotest_common.sh@1646 -- # local device=nvme1n2 00:03:57.960 17:45:14 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:57.960 17:45:14 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:03:57.960 17:45:14 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:03:57.960 17:45:14 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n3 00:03:57.960 17:45:14 -- common/autotest_common.sh@1646 -- # local device=nvme1n3 00:03:57.960 17:45:14 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:57.960 17:45:14 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:03:57.960 17:45:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:57.960 
17:45:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.960 17:45:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.960 17:45:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:57.960 17:45:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:57.960 17:45:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:57.960 No valid GPT data, bailing 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # pt= 00:03:57.960 17:45:14 -- scripts/common.sh@395 -- # return 1 00:03:57.960 17:45:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:57.960 1+0 records in 00:03:57.960 1+0 records out 00:03:57.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670324 s, 156 MB/s 00:03:57.960 17:45:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.960 17:45:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.960 17:45:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:57.960 17:45:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:57.960 17:45:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:57.960 No valid GPT data, bailing 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # pt= 00:03:57.960 17:45:14 -- scripts/common.sh@395 -- # return 1 00:03:57.960 17:45:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:57.960 1+0 records in 00:03:57.960 1+0 records out 00:03:57.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506579 s, 207 MB/s 00:03:57.960 17:45:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.960 17:45:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.960 17:45:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:57.960 
17:45:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:57.960 17:45:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:57.960 No valid GPT data, bailing 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # pt= 00:03:57.960 17:45:14 -- scripts/common.sh@395 -- # return 1 00:03:57.960 17:45:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:57.960 1+0 records in 00:03:57.960 1+0 records out 00:03:57.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610072 s, 172 MB/s 00:03:57.960 17:45:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.960 17:45:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.960 17:45:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:57.960 17:45:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:57.960 17:45:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:57.960 No valid GPT data, bailing 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:57.960 17:45:14 -- scripts/common.sh@394 -- # pt= 00:03:57.960 17:45:14 -- scripts/common.sh@395 -- # return 1 00:03:57.960 17:45:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:57.960 1+0 records in 00:03:57.960 1+0 records out 00:03:57.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621895 s, 169 MB/s 00:03:57.960 17:45:14 -- spdk/autotest.sh@105 -- # sync 00:03:57.960 17:45:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:57.960 17:45:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:57.960 17:45:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:59.342 17:45:17 -- spdk/autotest.sh@111 -- # uname -s 00:03:59.342 17:45:17 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:03:59.342 17:45:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:59.342 17:45:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.283 Hugepages 00:04:00.283 node hugesize free / total 00:04:00.283 node0 1048576kB 0 / 0 00:04:00.283 node0 2048kB 0 / 0 00:04:00.283 00:04:00.283 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.283 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.543 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:00.543 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:00.543 17:45:18 -- spdk/autotest.sh@117 -- # uname -s 00:04:00.543 17:45:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:00.543 17:45:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:00.543 17:45:18 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.744 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.744 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.744 17:45:20 -- common/autotest_common.sh@1513 -- # sleep 1 00:04:02.684 17:45:21 -- common/autotest_common.sh@1514 -- # bdfs=() 00:04:02.684 17:45:21 -- common/autotest_common.sh@1514 -- # local bdfs 00:04:02.684 17:45:21 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.684 17:45:21 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:04:02.684 17:45:21 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:02.684 17:45:21 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:02.684 17:45:21 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.684 17:45:21 -- common/autotest_common.sh@1495 
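The `setup.sh status` block above prints per-NUMA-node hugepage counters (`node0 1048576kB 0 / 0`, `node0 2048kB 0 / 0`). Those lines come from standard sysfs counters; a minimal re-creation of just that table (sysfs root parameterized so it can run without real hugepage nodes; this is a sketch, not setup.sh's actual code):

```shell
#!/usr/bin/env bash
# Print "node<N> <size> <free> / <total>" for every hugepage size on
# every NUMA node, reading the same counters setup.sh status consults.
hugepage_status() {
    local root=${1:-/sys/devices/system/node} node dir size
    for node in "$root"/node*; do
        for dir in "$node"/hugepages/hugepages-*; do
            size=${dir##*hugepages-}          # e.g. "2048kB"
            printf '%s %s %s / %s\n' "${node##*/}" "$size" \
                "$(<"$dir/free_hugepages")" "$(<"$dir/nr_hugepages")"
        done
    done
}
```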
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.684 17:45:21 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:02.944 17:45:21 -- common/autotest_common.sh@1496 -- # (( 2 == 0 )) 00:04:02.944 17:45:21 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.944 17:45:21 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.465 Waiting for block devices as requested 00:04:03.465 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.465 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.725 17:45:21 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:03.725 17:45:21 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.725 17:45:21 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.725 17:45:21 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.725 17:45:21 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:04:03.725 17:45:21 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:03.725 17:45:21 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:04:03.725 17:45:21 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:03.725 17:45:21 -- 
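The `readlink -f /sys/class/nvme/nvme* | grep <bdf>/nvme/nvme` trace above is how autotest maps a PCI address back to its controller node. Note the run shows `0000:00:10.0` resolving to `nvme1` and `0000:00:11.0` to `nvme0`: kernel enumeration order need not match BDF order, which is exactly why the lookup exists. A sketch of the idea (sysfs class directory parameterized for testing; helper name taken from the log):

```shell
#!/usr/bin/env bash
# Resolve each /sys/class/nvme/nvmeX symlink to its real sysfs path and
# print /dev/<name> for the one living under the requested PCI address.
get_nvme_ctrlr_from_bdf() {
    local bdf=$1 sysfs=${2:-/sys/class/nvme} link path
    for link in "$sysfs"/nvme*; do
        path=$(readlink -f "$link")
        if [[ $path == *"$bdf/nvme/nvme"* ]]; then
            printf '/dev/%s\n' "$(basename "$path")"
            return 0
        fi
    done
    return 1
}
```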
common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:03.725 17:45:21 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:03.725 17:45:21 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:03.725 17:45:21 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:04:03.726 17:45:21 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:03.726 17:45:21 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:03.726 17:45:21 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:03.726 17:45:21 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:03.726 17:45:21 -- common/autotest_common.sh@1539 -- # continue 00:04:03.726 17:45:21 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:03.726 17:45:21 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.726 17:45:21 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.726 17:45:21 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.726 17:45:21 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:04:03.726 17:45:21 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:04:03.726 17:45:21 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:03.726 17:45:21 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:03.726 17:45:22 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:03.726 17:45:22 -- 
common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:03.726 17:45:22 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:03.726 17:45:22 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:04:03.726 17:45:22 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:03.726 17:45:22 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:03.726 17:45:22 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:03.726 17:45:22 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:03.726 17:45:22 -- common/autotest_common.sh@1539 -- # continue 00:04:03.726 17:45:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:03.726 17:45:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.726 17:45:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.726 17:45:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:03.726 17:45:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.726 17:45:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.726 17:45:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.666 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.666 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.927 17:45:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:04.927 17:45:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.927 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:04.927 17:45:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:04.927 17:45:23 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:04.927 17:45:23 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.927 17:45:23 -- common/autotest_common.sh@1559 -- # bdfs=() 00:04:04.927 17:45:23 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:04:04.927 17:45:23 -- 
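The `nvme id-ctrl | grep oacs | cut -d: -f2` pipeline traced above parses the controller's Optional Admin Command Support field: `oacs : 0x12a` masked with bit 3 (0x8, Namespace Management) yields the `oacs_ns_manage=8` seen in the log, and the companion `unvmcap` check works the same way. Factored out so it can be fed canned `id-ctrl` output (a sketch of the trace, not SPDK's literal helper):

```shell
#!/usr/bin/env bash
# Read `nvme id-ctrl` output on stdin and print the Namespace Management
# bit of OACS (nonzero => supported). 0x12a & 0x8 == 8, as in the log.
parse_oacs_ns_manage() {
    local line oacs
    line=$(grep oacs) || return 1      # e.g. "oacs      : 0x12a"
    oacs=$(cut -d: -f2 <<<"$line")     # " 0x12a"
    echo $(( oacs & 0x8 ))
}
```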
common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:04:04.927 17:45:23 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:04:04.927 17:45:23 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:04:04.927 17:45:23 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:04.927 17:45:23 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:04.927 17:45:23 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.927 17:45:23 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.927 17:45:23 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:04.927 17:45:23 -- common/autotest_common.sh@1496 -- # (( 2 == 0 )) 00:04:04.927 17:45:23 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.927 17:45:23 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:04.927 17:45:23 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.927 17:45:23 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:04.927 17:45:23 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.927 17:45:23 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:04.927 17:45:23 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.927 17:45:23 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:04.927 17:45:23 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.927 17:45:23 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:04:04.927 17:45:23 -- common/autotest_common.sh@1568 -- # return 0 00:04:04.927 17:45:23 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:04.927 17:45:23 -- common/autotest_common.sh@1576 -- # return 0 00:04:04.927 17:45:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:04.927 17:45:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:04:04.927 17:45:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.927 17:45:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.927 17:45:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:04.927 17:45:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.927 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:04.927 17:45:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:04.927 17:45:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.927 17:45:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.927 17:45:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.927 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:04.927 ************************************ 00:04:04.927 START TEST env 00:04:04.927 ************************************ 00:04:04.927 17:45:23 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.191 * Looking for test storage... 
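The `opal_revert_cleanup` trace a little earlier (`cat /sys/bus/pci/devices/<bdf>/device`, compared against `0x0a54`, an Intel datacenter NVMe device ID) filters the BDF list down to controllers that need an Opal revert. The QEMU devices here report `0x0010`, so the resulting list is empty and the cleanup returns immediately. A sketch of that filter (sysfs root and candidate list parameterized for testing; in the log the candidates come from `gen_nvme.sh | jq`):

```shell
#!/usr/bin/env bash
# Print the subset of candidate BDFs whose PCI device ID matches $1;
# return nonzero when none match (so callers can skip the cleanup).
get_nvme_bdfs_by_id() {
    local want=$1 root=$2; shift 2
    local bdf matched=()
    for bdf in "$@"; do
        [[ $(<"$root/$bdf/device") == "$want" ]] && matched+=("$bdf")
    done
    (( ${#matched[@]} > 0 )) && printf '%s\n' "${matched[@]}"
}
```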
00:04:05.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1689 -- # lcov --version 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:05.191 17:45:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.191 17:45:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.191 17:45:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.191 17:45:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.191 17:45:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.191 17:45:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.191 17:45:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.191 17:45:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.191 17:45:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.191 17:45:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.191 17:45:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.191 17:45:23 env -- scripts/common.sh@344 -- # case "$op" in 00:04:05.191 17:45:23 env -- scripts/common.sh@345 -- # : 1 00:04:05.191 17:45:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.191 17:45:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.191 17:45:23 env -- scripts/common.sh@365 -- # decimal 1 00:04:05.191 17:45:23 env -- scripts/common.sh@353 -- # local d=1 00:04:05.191 17:45:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.191 17:45:23 env -- scripts/common.sh@355 -- # echo 1 00:04:05.191 17:45:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.191 17:45:23 env -- scripts/common.sh@366 -- # decimal 2 00:04:05.191 17:45:23 env -- scripts/common.sh@353 -- # local d=2 00:04:05.191 17:45:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.191 17:45:23 env -- scripts/common.sh@355 -- # echo 2 00:04:05.191 17:45:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.191 17:45:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.191 17:45:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.191 17:45:23 env -- scripts/common.sh@368 -- # return 0 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.191 --rc genhtml_branch_coverage=1 00:04:05.191 --rc genhtml_function_coverage=1 00:04:05.191 --rc genhtml_legend=1 00:04:05.191 --rc geninfo_all_blocks=1 00:04:05.191 --rc geninfo_unexecuted_blocks=1 00:04:05.191 00:04:05.191 ' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.191 --rc genhtml_branch_coverage=1 00:04:05.191 --rc genhtml_function_coverage=1 00:04:05.191 --rc genhtml_legend=1 00:04:05.191 --rc geninfo_all_blocks=1 00:04:05.191 --rc geninfo_unexecuted_blocks=1 00:04:05.191 00:04:05.191 ' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:05.191 --rc genhtml_branch_coverage=1 00:04:05.191 --rc genhtml_function_coverage=1 00:04:05.191 --rc genhtml_legend=1 00:04:05.191 --rc geninfo_all_blocks=1 00:04:05.191 --rc geninfo_unexecuted_blocks=1 00:04:05.191 00:04:05.191 ' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.191 --rc genhtml_branch_coverage=1 00:04:05.191 --rc genhtml_function_coverage=1 00:04:05.191 --rc genhtml_legend=1 00:04:05.191 --rc geninfo_all_blocks=1 00:04:05.191 --rc geninfo_unexecuted_blocks=1 00:04:05.191 00:04:05.191 ' 00:04:05.191 17:45:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.191 17:45:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.191 17:45:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.191 ************************************ 00:04:05.191 START TEST env_memory 00:04:05.191 ************************************ 00:04:05.191 17:45:23 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.191 00:04:05.191 00:04:05.191 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.191 http://cunit.sourceforge.net/ 00:04:05.191 00:04:05.191 00:04:05.191 Suite: memory 00:04:05.459 Test: alloc and free memory map ...[2024-10-25 17:45:23.662066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.459 passed 00:04:05.459 Test: mem map translation ...[2024-10-25 17:45:23.703982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.459 [2024-10-25 17:45:23.704019] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.459 [2024-10-25 17:45:23.704077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.459 [2024-10-25 17:45:23.704110] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.459 passed 00:04:05.459 Test: mem map registration ...[2024-10-25 17:45:23.768981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:05.459 [2024-10-25 17:45:23.769017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:05.460 passed 00:04:05.460 Test: mem map adjacent registrations ...passed 00:04:05.460 00:04:05.460 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.460 suites 1 1 n/a 0 0 00:04:05.460 tests 4 4 4 0 0 00:04:05.460 asserts 152 152 152 0 n/a 00:04:05.460 00:04:05.460 Elapsed time = 0.231 seconds 00:04:05.460 00:04:05.460 real 0m0.288s 00:04:05.460 user 0m0.249s 00:04:05.460 sys 0m0.028s 00:04:05.460 17:45:23 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.460 17:45:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.460 ************************************ 00:04:05.460 END TEST env_memory 00:04:05.460 ************************************ 00:04:05.724 17:45:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.724 17:45:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.724 17:45:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.724 17:45:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.724 
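The `lt 1.15 2` / `cmp_versions` trace in the lcov-detection section earlier splits each version string on dots and compares field by field, padding the shorter one with zeros. A sketch covering the numeric case shown (the real `scripts/common.sh` helper also handles `-`-separated components, which this omits):

```shell
#!/usr/bin/env bash
# Return 0 when dotted version $1 sorts strictly before $2.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<<"$1"
    IFS=. read -ra v2 <<<"$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal
}
```

Field-wise comparison is what makes `1.2 < 1.10` come out true, which plain string comparison would get wrong.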
************************************ 00:04:05.724 START TEST env_vtophys 00:04:05.724 ************************************ 00:04:05.724 17:45:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.724 EAL: lib.eal log level changed from notice to debug 00:04:05.724 EAL: Detected lcore 0 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 1 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 2 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 3 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 4 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 5 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 6 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 7 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 8 as core 0 on socket 0 00:04:05.724 EAL: Detected lcore 9 as core 0 on socket 0 00:04:05.724 EAL: Maximum logical cores by configuration: 128 00:04:05.724 EAL: Detected CPU lcores: 10 00:04:05.724 EAL: Detected NUMA nodes: 1 00:04:05.724 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.724 EAL: Detected shared linkage of DPDK 00:04:05.724 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.724 EAL: Selected IOVA mode 'PA' 00:04:05.724 EAL: Probing VFIO support... 00:04:05.724 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.724 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:05.724 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.724 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.724 EAL: Setting up physically contiguous memory... 
00:04:05.724 EAL: Setting maximum number of open files to 524288 00:04:05.724 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.724 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.724 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.724 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.724 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.724 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.724 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.724 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.724 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.724 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.724 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.724 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.724 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.724 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.725 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.725 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.725 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.725 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.725 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.725 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.725 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.725 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.725 EAL: Hugepages will be freed exactly as allocated. 
00:04:05.725 EAL: No shared files mode enabled, IPC is disabled 00:04:05.725 EAL: No shared files mode enabled, IPC is disabled 00:04:05.725 EAL: TSC frequency is ~2290000 KHz 00:04:05.725 EAL: Main lcore 0 is ready (tid=7faaa4c0da40;cpuset=[0]) 00:04:05.725 EAL: Trying to obtain current memory policy. 00:04:05.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.725 EAL: Restoring previous memory policy: 0 00:04:05.725 EAL: request: mp_malloc_sync 00:04:05.725 EAL: No shared files mode enabled, IPC is disabled 00:04:05.725 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.725 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.984 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.984 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.984 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:05.984 00:04:05.984 00:04:05.984 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.984 http://cunit.sourceforge.net/ 00:04:05.984 00:04:05.984 00:04:05.984 Suite: components_suite 00:04:06.244 Test: vtophys_malloc_test ...passed 00:04:06.244 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.244 EAL: Restoring previous memory policy: 4 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.244 EAL: Trying to obtain current memory policy. 
00:04:06.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.244 EAL: Restoring previous memory policy: 4 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.244 EAL: Trying to obtain current memory policy. 00:04:06.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.244 EAL: Restoring previous memory policy: 4 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.244 EAL: Trying to obtain current memory policy. 00:04:06.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.244 EAL: Restoring previous memory policy: 4 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.244 EAL: Trying to obtain current memory policy. 
00:04:06.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.244 EAL: Restoring previous memory policy: 4 00:04:06.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.244 EAL: request: mp_malloc_sync 00:04:06.244 EAL: No shared files mode enabled, IPC is disabled 00:04:06.244 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.505 EAL: request: mp_malloc_sync 00:04:06.505 EAL: No shared files mode enabled, IPC is disabled 00:04:06.505 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.505 EAL: Trying to obtain current memory policy. 00:04:06.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.505 EAL: Restoring previous memory policy: 4 00:04:06.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.505 EAL: request: mp_malloc_sync 00:04:06.505 EAL: No shared files mode enabled, IPC is disabled 00:04:06.505 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.505 EAL: request: mp_malloc_sync 00:04:06.505 EAL: No shared files mode enabled, IPC is disabled 00:04:06.505 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.765 EAL: Trying to obtain current memory policy. 00:04:06.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.765 EAL: Restoring previous memory policy: 4 00:04:06.765 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.765 EAL: request: mp_malloc_sync 00:04:06.765 EAL: No shared files mode enabled, IPC is disabled 00:04:06.765 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.765 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.025 EAL: request: mp_malloc_sync 00:04:07.025 EAL: No shared files mode enabled, IPC is disabled 00:04:07.025 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.025 EAL: Trying to obtain current memory policy. 
00:04:07.025 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.285 EAL: Restoring previous memory policy: 4 00:04:07.285 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.285 EAL: request: mp_malloc_sync 00:04:07.285 EAL: No shared files mode enabled, IPC is disabled 00:04:07.285 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.545 EAL: request: mp_malloc_sync 00:04:07.545 EAL: No shared files mode enabled, IPC is disabled 00:04:07.545 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.115 EAL: Trying to obtain current memory policy. 00:04:08.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.115 EAL: Restoring previous memory policy: 4 00:04:08.115 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.115 EAL: request: mp_malloc_sync 00:04:08.115 EAL: No shared files mode enabled, IPC is disabled 00:04:08.115 EAL: Heap on socket 0 was expanded by 514MB 00:04:09.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.057 EAL: request: mp_malloc_sync 00:04:09.057 EAL: No shared files mode enabled, IPC is disabled 00:04:09.057 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.997 EAL: Trying to obtain current memory policy. 
00:04:09.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.997 EAL: Restoring previous memory policy: 4 00:04:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.997 EAL: request: mp_malloc_sync 00:04:09.997 EAL: No shared files mode enabled, IPC is disabled 00:04:09.997 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.905 EAL: request: mp_malloc_sync 00:04:11.905 EAL: No shared files mode enabled, IPC is disabled 00:04:11.905 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.814 passed 00:04:13.814 00:04:13.814 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.814 suites 1 1 n/a 0 0 00:04:13.814 tests 2 2 2 0 0 00:04:13.814 asserts 5761 5761 5761 0 n/a 00:04:13.814 00:04:13.814 Elapsed time = 7.495 seconds 00:04:13.814 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.814 EAL: request: mp_malloc_sync 00:04:13.814 EAL: No shared files mode enabled, IPC is disabled 00:04:13.814 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.814 EAL: No shared files mode enabled, IPC is disabled 00:04:13.814 EAL: No shared files mode enabled, IPC is disabled 00:04:13.814 EAL: No shared files mode enabled, IPC is disabled 00:04:13.814 00:04:13.814 real 0m7.825s 00:04:13.814 user 0m6.887s 00:04:13.814 sys 0m0.793s 00:04:13.814 17:45:31 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.814 17:45:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.814 ************************************ 00:04:13.814 END TEST env_vtophys 00:04:13.814 ************************************ 00:04:13.814 17:45:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.814 17:45:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.814 17:45:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.814 17:45:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.814 
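The heap expansions in the vtophys_spdk_malloc_test run above step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB, which matches 2^k + 2 MB for k = 1..10: each spdk mem-event callback is exercised at a roughly doubling allocation size. A one-liner reproducing the sequence (an observation about the logged numbers, not a claim about the test's source):

```shell
#!/usr/bin/env bash
# Emit the allocation sizes (in MB) seen in the malloc test trace.
sizes=$(for k in $(seq 1 10); do echo $(( (1 << k) + 2 )); done | paste -sd,)
echo "$sizes"   # → 4,6,10,18,34,66,130,258,514,1026
```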
************************************ 00:04:13.814 START TEST env_pci 00:04:13.814 ************************************ 00:04:13.814 17:45:31 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.814 00:04:13.814 00:04:13.814 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.814 http://cunit.sourceforge.net/ 00:04:13.814 00:04:13.814 00:04:13.814 Suite: pci 00:04:13.814 Test: pci_hook ...[2024-10-25 17:45:31.886754] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56704 has claimed it 00:04:13.814 passed 00:04:13.814 00:04:13.814 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.814 suites 1 1 n/a 0 0 00:04:13.814 tests 1 1 1 0 0 00:04:13.814 asserts 25 25 25 0 n/a 00:04:13.814 00:04:13.814 Elapsed time = 0.011 seconds 00:04:13.814 EAL: Cannot find device (10000:00:01.0) 00:04:13.814 EAL: Failed to attach device on primary process 00:04:13.814 00:04:13.814 real 0m0.097s 00:04:13.814 user 0m0.042s 00:04:13.814 sys 0m0.054s 00:04:13.814 17:45:31 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.814 17:45:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.814 ************************************ 00:04:13.814 END TEST env_pci 00:04:13.814 ************************************ 00:04:13.814 17:45:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.814 17:45:31 env -- env/env.sh@15 -- # uname 00:04:13.814 17:45:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.814 17:45:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.814 17:45:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.814 17:45:32 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:13.814 17:45:32 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.814 17:45:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.814 ************************************ 00:04:13.814 START TEST env_dpdk_post_init 00:04:13.814 ************************************ 00:04:13.814 17:45:32 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.814 EAL: Detected CPU lcores: 10 00:04:13.814 EAL: Detected NUMA nodes: 1 00:04:13.814 EAL: Detected shared linkage of DPDK 00:04:13.814 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.814 EAL: Selected IOVA mode 'PA' 00:04:13.814 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.814 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:14.074 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:14.074 Starting DPDK initialization... 00:04:14.074 Starting SPDK post initialization... 00:04:14.074 SPDK NVMe probe 00:04:14.074 Attaching to 0000:00:10.0 00:04:14.074 Attaching to 0000:00:11.0 00:04:14.074 Attached to 0000:00:10.0 00:04:14.074 Attached to 0000:00:11.0 00:04:14.074 Cleaning up... 
00:04:14.074 00:04:14.074 real 0m0.277s 00:04:14.074 user 0m0.084s 00:04:14.074 sys 0m0.092s 00:04:14.074 17:45:32 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.074 ************************************ 00:04:14.074 END TEST env_dpdk_post_init 00:04:14.074 ************************************ 00:04:14.074 17:45:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.074 17:45:32 env -- env/env.sh@26 -- # uname 00:04:14.074 17:45:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.074 17:45:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.074 17:45:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.074 17:45:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.074 17:45:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.074 ************************************ 00:04:14.074 START TEST env_mem_callbacks 00:04:14.074 ************************************ 00:04:14.074 17:45:32 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.074 EAL: Detected CPU lcores: 10 00:04:14.074 EAL: Detected NUMA nodes: 1 00:04:14.074 EAL: Detected shared linkage of DPDK 00:04:14.074 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.074 EAL: Selected IOVA mode 'PA' 00:04:14.334 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.334 00:04:14.334 00:04:14.334 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.334 http://cunit.sourceforge.net/ 00:04:14.334 00:04:14.334 00:04:14.334 Suite: memory 00:04:14.334 Test: test ... 
00:04:14.334 register 0x200000200000 2097152 00:04:14.334 malloc 3145728 00:04:14.334 register 0x200000400000 4194304 00:04:14.334 buf 0x2000004fffc0 len 3145728 PASSED 00:04:14.334 malloc 64 00:04:14.334 buf 0x2000004ffec0 len 64 PASSED 00:04:14.334 malloc 4194304 00:04:14.334 register 0x200000800000 6291456 00:04:14.334 buf 0x2000009fffc0 len 4194304 PASSED 00:04:14.334 free 0x2000004fffc0 3145728 00:04:14.334 free 0x2000004ffec0 64 00:04:14.334 unregister 0x200000400000 4194304 PASSED 00:04:14.334 free 0x2000009fffc0 4194304 00:04:14.334 unregister 0x200000800000 6291456 PASSED 00:04:14.334 malloc 8388608 00:04:14.334 register 0x200000400000 10485760 00:04:14.334 buf 0x2000005fffc0 len 8388608 PASSED 00:04:14.334 free 0x2000005fffc0 8388608 00:04:14.334 unregister 0x200000400000 10485760 PASSED 00:04:14.334 passed 00:04:14.334 00:04:14.334 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.334 suites 1 1 n/a 0 0 00:04:14.334 tests 1 1 1 0 0 00:04:14.334 asserts 15 15 15 0 n/a 00:04:14.334 00:04:14.334 Elapsed time = 0.078 seconds 00:04:14.334 00:04:14.334 real 0m0.272s 00:04:14.334 user 0m0.101s 00:04:14.334 sys 0m0.069s 00:04:14.334 17:45:32 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.334 17:45:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.334 ************************************ 00:04:14.334 END TEST env_mem_callbacks 00:04:14.334 ************************************ 00:04:14.334 00:04:14.334 real 0m9.351s 00:04:14.334 user 0m7.597s 00:04:14.334 sys 0m1.412s 00:04:14.334 17:45:32 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.334 17:45:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.334 ************************************ 00:04:14.334 END TEST env 00:04:14.334 ************************************ 00:04:14.334 17:45:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.334 17:45:32 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.334 17:45:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.335 17:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:14.335 ************************************ 00:04:14.335 START TEST rpc 00:04:14.335 ************************************ 00:04:14.335 17:45:32 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.595 * Looking for test storage... 00:04:14.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.595 17:45:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.595 17:45:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.595 17:45:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.595 17:45:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.595 17:45:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.595 17:45:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.595 17:45:32 rpc -- scripts/common.sh@345 -- # : 1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.595 17:45:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.595 17:45:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.595 17:45:32 rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.595 17:45:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.595 17:45:32 rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.595 17:45:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.595 17:45:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.595 17:45:32 rpc -- scripts/common.sh@368 -- # return 0 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:14.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.595 --rc genhtml_branch_coverage=1 00:04:14.595 --rc genhtml_function_coverage=1 00:04:14.595 --rc genhtml_legend=1 00:04:14.595 --rc geninfo_all_blocks=1 00:04:14.595 --rc geninfo_unexecuted_blocks=1 00:04:14.595 00:04:14.595 ' 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:14.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.595 --rc genhtml_branch_coverage=1 00:04:14.595 --rc genhtml_function_coverage=1 00:04:14.595 --rc genhtml_legend=1 00:04:14.595 --rc geninfo_all_blocks=1 00:04:14.595 --rc geninfo_unexecuted_blocks=1 00:04:14.595 00:04:14.595 ' 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:14.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:14.595 --rc genhtml_branch_coverage=1 00:04:14.595 --rc genhtml_function_coverage=1 00:04:14.595 --rc genhtml_legend=1 00:04:14.595 --rc geninfo_all_blocks=1 00:04:14.595 --rc geninfo_unexecuted_blocks=1 00:04:14.595 00:04:14.595 ' 00:04:14.595 17:45:32 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:14.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.595 --rc genhtml_branch_coverage=1 00:04:14.595 --rc genhtml_function_coverage=1 00:04:14.596 --rc genhtml_legend=1 00:04:14.596 --rc geninfo_all_blocks=1 00:04:14.596 --rc geninfo_unexecuted_blocks=1 00:04:14.596 00:04:14.596 ' 00:04:14.596 17:45:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:14.596 17:45:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56831 00:04:14.596 17:45:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.596 17:45:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56831 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@831 -- # '[' -z 56831 ']' 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.596 17:45:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.856 [2024-10-25 17:45:33.096809] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:04:14.856 [2024-10-25 17:45:33.096952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56831 ] 00:04:14.856 [2024-10-25 17:45:33.276441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.116 [2024-10-25 17:45:33.382494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.116 [2024-10-25 17:45:33.382554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56831' to capture a snapshot of events at runtime. 00:04:15.116 [2024-10-25 17:45:33.382564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.116 [2024-10-25 17:45:33.382589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.116 [2024-10-25 17:45:33.382596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56831 for offline analysis/debug. 
00:04:15.116 [2024-10-25 17:45:33.383784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.057 17:45:34 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.057 17:45:34 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:16.057 17:45:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.057 17:45:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.057 17:45:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.057 17:45:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.057 17:45:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.057 17:45:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.057 17:45:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.057 ************************************ 00:04:16.057 START TEST rpc_integrity 00:04:16.057 ************************************ 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:16.057 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.057 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.057 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.057 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.057 17:45:34 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.057 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.057 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.058 { 00:04:16.058 "name": "Malloc0", 00:04:16.058 "aliases": [ 00:04:16.058 "f6e57110-49de-48e5-8b10-e2548bf4f0ba" 00:04:16.058 ], 00:04:16.058 "product_name": "Malloc disk", 00:04:16.058 "block_size": 512, 00:04:16.058 "num_blocks": 16384, 00:04:16.058 "uuid": "f6e57110-49de-48e5-8b10-e2548bf4f0ba", 00:04:16.058 "assigned_rate_limits": { 00:04:16.058 "rw_ios_per_sec": 0, 00:04:16.058 "rw_mbytes_per_sec": 0, 00:04:16.058 "r_mbytes_per_sec": 0, 00:04:16.058 "w_mbytes_per_sec": 0 00:04:16.058 }, 00:04:16.058 "claimed": false, 00:04:16.058 "zoned": false, 00:04:16.058 "supported_io_types": { 00:04:16.058 "read": true, 00:04:16.058 "write": true, 00:04:16.058 "unmap": true, 00:04:16.058 "flush": true, 00:04:16.058 "reset": true, 00:04:16.058 "nvme_admin": false, 00:04:16.058 "nvme_io": false, 00:04:16.058 "nvme_io_md": false, 00:04:16.058 "write_zeroes": true, 00:04:16.058 "zcopy": true, 00:04:16.058 "get_zone_info": false, 00:04:16.058 "zone_management": false, 00:04:16.058 "zone_append": false, 00:04:16.058 "compare": false, 00:04:16.058 "compare_and_write": false, 00:04:16.058 "abort": true, 00:04:16.058 "seek_hole": false, 
00:04:16.058 "seek_data": false, 00:04:16.058 "copy": true, 00:04:16.058 "nvme_iov_md": false 00:04:16.058 }, 00:04:16.058 "memory_domains": [ 00:04:16.058 { 00:04:16.058 "dma_device_id": "system", 00:04:16.058 "dma_device_type": 1 00:04:16.058 }, 00:04:16.058 { 00:04:16.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.058 "dma_device_type": 2 00:04:16.058 } 00:04:16.058 ], 00:04:16.058 "driver_specific": {} 00:04:16.058 } 00:04:16.058 ]' 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.058 [2024-10-25 17:45:34.377552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.058 [2024-10-25 17:45:34.377629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.058 [2024-10-25 17:45:34.377650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:16.058 [2024-10-25 17:45:34.377680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.058 [2024-10-25 17:45:34.379948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.058 [2024-10-25 17:45:34.379989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.058 Passthru0 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.058 { 00:04:16.058 "name": "Malloc0", 00:04:16.058 "aliases": [ 00:04:16.058 "f6e57110-49de-48e5-8b10-e2548bf4f0ba" 00:04:16.058 ], 00:04:16.058 "product_name": "Malloc disk", 00:04:16.058 "block_size": 512, 00:04:16.058 "num_blocks": 16384, 00:04:16.058 "uuid": "f6e57110-49de-48e5-8b10-e2548bf4f0ba", 00:04:16.058 "assigned_rate_limits": { 00:04:16.058 "rw_ios_per_sec": 0, 00:04:16.058 "rw_mbytes_per_sec": 0, 00:04:16.058 "r_mbytes_per_sec": 0, 00:04:16.058 "w_mbytes_per_sec": 0 00:04:16.058 }, 00:04:16.058 "claimed": true, 00:04:16.058 "claim_type": "exclusive_write", 00:04:16.058 "zoned": false, 00:04:16.058 "supported_io_types": { 00:04:16.058 "read": true, 00:04:16.058 "write": true, 00:04:16.058 "unmap": true, 00:04:16.058 "flush": true, 00:04:16.058 "reset": true, 00:04:16.058 "nvme_admin": false, 00:04:16.058 "nvme_io": false, 00:04:16.058 "nvme_io_md": false, 00:04:16.058 "write_zeroes": true, 00:04:16.058 "zcopy": true, 00:04:16.058 "get_zone_info": false, 00:04:16.058 "zone_management": false, 00:04:16.058 "zone_append": false, 00:04:16.058 "compare": false, 00:04:16.058 "compare_and_write": false, 00:04:16.058 "abort": true, 00:04:16.058 "seek_hole": false, 00:04:16.058 "seek_data": false, 00:04:16.058 "copy": true, 00:04:16.058 "nvme_iov_md": false 00:04:16.058 }, 00:04:16.058 "memory_domains": [ 00:04:16.058 { 00:04:16.058 "dma_device_id": "system", 00:04:16.058 "dma_device_type": 1 00:04:16.058 }, 00:04:16.058 { 00:04:16.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.058 "dma_device_type": 2 00:04:16.058 } 00:04:16.058 ], 00:04:16.058 "driver_specific": {} 00:04:16.058 }, 00:04:16.058 { 00:04:16.058 "name": "Passthru0", 00:04:16.058 "aliases": [ 00:04:16.058 "b8d07c12-3cca-5193-a6ad-b5ed44624d4c" 00:04:16.058 ], 00:04:16.058 "product_name": "passthru", 00:04:16.058 
"block_size": 512, 00:04:16.058 "num_blocks": 16384, 00:04:16.058 "uuid": "b8d07c12-3cca-5193-a6ad-b5ed44624d4c", 00:04:16.058 "assigned_rate_limits": { 00:04:16.058 "rw_ios_per_sec": 0, 00:04:16.058 "rw_mbytes_per_sec": 0, 00:04:16.058 "r_mbytes_per_sec": 0, 00:04:16.058 "w_mbytes_per_sec": 0 00:04:16.058 }, 00:04:16.058 "claimed": false, 00:04:16.058 "zoned": false, 00:04:16.058 "supported_io_types": { 00:04:16.058 "read": true, 00:04:16.058 "write": true, 00:04:16.058 "unmap": true, 00:04:16.058 "flush": true, 00:04:16.058 "reset": true, 00:04:16.058 "nvme_admin": false, 00:04:16.058 "nvme_io": false, 00:04:16.058 "nvme_io_md": false, 00:04:16.058 "write_zeroes": true, 00:04:16.058 "zcopy": true, 00:04:16.058 "get_zone_info": false, 00:04:16.058 "zone_management": false, 00:04:16.058 "zone_append": false, 00:04:16.058 "compare": false, 00:04:16.058 "compare_and_write": false, 00:04:16.058 "abort": true, 00:04:16.058 "seek_hole": false, 00:04:16.058 "seek_data": false, 00:04:16.058 "copy": true, 00:04:16.058 "nvme_iov_md": false 00:04:16.058 }, 00:04:16.058 "memory_domains": [ 00:04:16.058 { 00:04:16.058 "dma_device_id": "system", 00:04:16.058 "dma_device_type": 1 00:04:16.058 }, 00:04:16.058 { 00:04:16.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.058 "dma_device_type": 2 00:04:16.058 } 00:04:16.058 ], 00:04:16.058 "driver_specific": { 00:04:16.058 "passthru": { 00:04:16.058 "name": "Passthru0", 00:04:16.058 "base_bdev_name": "Malloc0" 00:04:16.058 } 00:04:16.058 } 00:04:16.058 } 00:04:16.058 ]' 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.058 17:45:34 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.058 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.058 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.318 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.318 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.318 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.318 17:45:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.318 00:04:16.318 real 0m0.356s 00:04:16.318 user 0m0.186s 00:04:16.318 sys 0m0.060s 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.318 17:45:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 ************************************ 00:04:16.318 END TEST rpc_integrity 00:04:16.318 ************************************ 00:04:16.318 17:45:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.318 17:45:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.318 17:45:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.318 17:45:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 ************************************ 00:04:16.318 START TEST rpc_plugins 00:04:16.318 ************************************ 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:16.318 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.318 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.318 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.318 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.318 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.319 { 00:04:16.319 "name": "Malloc1", 00:04:16.319 "aliases": [ 00:04:16.319 "dbc6f363-55ec-4594-8208-56e91aa589b5" 00:04:16.319 ], 00:04:16.319 "product_name": "Malloc disk", 00:04:16.319 "block_size": 4096, 00:04:16.319 "num_blocks": 256, 00:04:16.319 "uuid": "dbc6f363-55ec-4594-8208-56e91aa589b5", 00:04:16.319 "assigned_rate_limits": { 00:04:16.319 "rw_ios_per_sec": 0, 00:04:16.319 "rw_mbytes_per_sec": 0, 00:04:16.319 "r_mbytes_per_sec": 0, 00:04:16.319 "w_mbytes_per_sec": 0 00:04:16.319 }, 00:04:16.319 "claimed": false, 00:04:16.319 "zoned": false, 00:04:16.319 "supported_io_types": { 00:04:16.319 "read": true, 00:04:16.319 "write": true, 00:04:16.319 "unmap": true, 00:04:16.319 "flush": true, 00:04:16.319 "reset": true, 00:04:16.319 "nvme_admin": false, 00:04:16.319 "nvme_io": false, 00:04:16.319 "nvme_io_md": false, 00:04:16.319 "write_zeroes": true, 00:04:16.319 "zcopy": true, 00:04:16.319 "get_zone_info": false, 00:04:16.319 "zone_management": false, 00:04:16.319 "zone_append": false, 00:04:16.319 "compare": false, 00:04:16.319 "compare_and_write": false, 00:04:16.319 "abort": true, 00:04:16.319 "seek_hole": false, 00:04:16.319 "seek_data": false, 00:04:16.319 "copy": 
true, 00:04:16.319 "nvme_iov_md": false 00:04:16.319 }, 00:04:16.319 "memory_domains": [ 00:04:16.319 { 00:04:16.319 "dma_device_id": "system", 00:04:16.319 "dma_device_type": 1 00:04:16.319 }, 00:04:16.319 { 00:04:16.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.319 "dma_device_type": 2 00:04:16.319 } 00:04:16.319 ], 00:04:16.319 "driver_specific": {} 00:04:16.319 } 00:04:16.319 ]' 00:04:16.319 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.319 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.319 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.319 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.319 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.579 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.579 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.579 17:45:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.579 00:04:16.579 real 0m0.168s 00:04:16.579 user 0m0.102s 00:04:16.579 sys 0m0.023s 00:04:16.579 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.579 17:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.579 ************************************ 00:04:16.579 END TEST rpc_plugins 00:04:16.579 ************************************ 00:04:16.579 17:45:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.579 17:45:34 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.579 17:45:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.579 17:45:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.579 ************************************ 00:04:16.579 START TEST rpc_trace_cmd_test 00:04:16.579 ************************************ 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.579 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56831", 00:04:16.579 "tpoint_group_mask": "0x8", 00:04:16.579 "iscsi_conn": { 00:04:16.579 "mask": "0x2", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "scsi": { 00:04:16.579 "mask": "0x4", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "bdev": { 00:04:16.579 "mask": "0x8", 00:04:16.579 "tpoint_mask": "0xffffffffffffffff" 00:04:16.579 }, 00:04:16.579 "nvmf_rdma": { 00:04:16.579 "mask": "0x10", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "nvmf_tcp": { 00:04:16.579 "mask": "0x20", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "ftl": { 00:04:16.579 "mask": "0x40", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "blobfs": { 00:04:16.579 "mask": "0x80", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "dsa": { 00:04:16.579 "mask": "0x200", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "thread": { 00:04:16.579 "mask": "0x400", 00:04:16.579 
"tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "nvme_pcie": { 00:04:16.579 "mask": "0x800", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "iaa": { 00:04:16.579 "mask": "0x1000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "nvme_tcp": { 00:04:16.579 "mask": "0x2000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "bdev_nvme": { 00:04:16.579 "mask": "0x4000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "sock": { 00:04:16.579 "mask": "0x8000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "blob": { 00:04:16.579 "mask": "0x10000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "bdev_raid": { 00:04:16.579 "mask": "0x20000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 }, 00:04:16.579 "scheduler": { 00:04:16.579 "mask": "0x40000", 00:04:16.579 "tpoint_mask": "0x0" 00:04:16.579 } 00:04:16.579 }' 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.579 17:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.839 00:04:16.839 real 0m0.251s 00:04:16.839 user 0m0.197s 00:04:16.839 sys 0m0.046s 00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:16.839 17:45:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 ************************************ 00:04:16.839 END TEST rpc_trace_cmd_test 00:04:16.839 ************************************ 00:04:16.839 17:45:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.839 17:45:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.839 17:45:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.839 17:45:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.839 17:45:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.839 17:45:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 ************************************ 00:04:16.839 START TEST rpc_daemon_integrity 00:04:16.839 ************************************ 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.839 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.100 { 00:04:17.100 "name": "Malloc2", 00:04:17.100 "aliases": [ 00:04:17.100 "6f1a9c8a-b7da-4bca-af70-b38794c91b51" 00:04:17.100 ], 00:04:17.100 "product_name": "Malloc disk", 00:04:17.100 "block_size": 512, 00:04:17.100 "num_blocks": 16384, 00:04:17.100 "uuid": "6f1a9c8a-b7da-4bca-af70-b38794c91b51", 00:04:17.100 "assigned_rate_limits": { 00:04:17.100 "rw_ios_per_sec": 0, 00:04:17.100 "rw_mbytes_per_sec": 0, 00:04:17.100 "r_mbytes_per_sec": 0, 00:04:17.100 "w_mbytes_per_sec": 0 00:04:17.100 }, 00:04:17.100 "claimed": false, 00:04:17.100 "zoned": false, 00:04:17.100 "supported_io_types": { 00:04:17.100 "read": true, 00:04:17.100 "write": true, 00:04:17.100 "unmap": true, 00:04:17.100 "flush": true, 00:04:17.100 "reset": true, 00:04:17.100 "nvme_admin": false, 00:04:17.100 "nvme_io": false, 00:04:17.100 "nvme_io_md": false, 00:04:17.100 "write_zeroes": true, 00:04:17.100 "zcopy": true, 00:04:17.100 "get_zone_info": false, 00:04:17.100 "zone_management": false, 00:04:17.100 "zone_append": false, 00:04:17.100 "compare": false, 00:04:17.100 "compare_and_write": false, 00:04:17.100 "abort": true, 00:04:17.100 "seek_hole": false, 00:04:17.100 "seek_data": false, 00:04:17.100 "copy": true, 00:04:17.100 "nvme_iov_md": false 00:04:17.100 }, 00:04:17.100 "memory_domains": [ 00:04:17.100 { 00:04:17.100 "dma_device_id": "system", 00:04:17.100 "dma_device_type": 1 00:04:17.100 }, 00:04:17.100 { 00:04:17.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.100 "dma_device_type": 2 00:04:17.100 } 
00:04:17.100 ], 00:04:17.100 "driver_specific": {} 00:04:17.100 } 00:04:17.100 ]' 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.100 [2024-10-25 17:45:35.347184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.100 [2024-10-25 17:45:35.347259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.100 [2024-10-25 17:45:35.347278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:17.100 [2024-10-25 17:45:35.347288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.100 [2024-10-25 17:45:35.349420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.100 [2024-10-25 17:45:35.349457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.100 Passthru0 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.100 { 00:04:17.100 "name": "Malloc2", 00:04:17.100 "aliases": [ 00:04:17.100 "6f1a9c8a-b7da-4bca-af70-b38794c91b51" 
00:04:17.100 ], 00:04:17.100 "product_name": "Malloc disk", 00:04:17.100 "block_size": 512, 00:04:17.100 "num_blocks": 16384, 00:04:17.100 "uuid": "6f1a9c8a-b7da-4bca-af70-b38794c91b51", 00:04:17.100 "assigned_rate_limits": { 00:04:17.100 "rw_ios_per_sec": 0, 00:04:17.100 "rw_mbytes_per_sec": 0, 00:04:17.100 "r_mbytes_per_sec": 0, 00:04:17.100 "w_mbytes_per_sec": 0 00:04:17.100 }, 00:04:17.100 "claimed": true, 00:04:17.100 "claim_type": "exclusive_write", 00:04:17.100 "zoned": false, 00:04:17.100 "supported_io_types": { 00:04:17.100 "read": true, 00:04:17.100 "write": true, 00:04:17.100 "unmap": true, 00:04:17.100 "flush": true, 00:04:17.100 "reset": true, 00:04:17.100 "nvme_admin": false, 00:04:17.100 "nvme_io": false, 00:04:17.100 "nvme_io_md": false, 00:04:17.100 "write_zeroes": true, 00:04:17.100 "zcopy": true, 00:04:17.100 "get_zone_info": false, 00:04:17.100 "zone_management": false, 00:04:17.100 "zone_append": false, 00:04:17.100 "compare": false, 00:04:17.100 "compare_and_write": false, 00:04:17.100 "abort": true, 00:04:17.100 "seek_hole": false, 00:04:17.100 "seek_data": false, 00:04:17.100 "copy": true, 00:04:17.100 "nvme_iov_md": false 00:04:17.100 }, 00:04:17.100 "memory_domains": [ 00:04:17.100 { 00:04:17.100 "dma_device_id": "system", 00:04:17.100 "dma_device_type": 1 00:04:17.100 }, 00:04:17.100 { 00:04:17.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.100 "dma_device_type": 2 00:04:17.100 } 00:04:17.100 ], 00:04:17.100 "driver_specific": {} 00:04:17.100 }, 00:04:17.100 { 00:04:17.100 "name": "Passthru0", 00:04:17.100 "aliases": [ 00:04:17.100 "062716cb-2737-5b09-9036-df2b9dd5945a" 00:04:17.100 ], 00:04:17.100 "product_name": "passthru", 00:04:17.100 "block_size": 512, 00:04:17.100 "num_blocks": 16384, 00:04:17.100 "uuid": "062716cb-2737-5b09-9036-df2b9dd5945a", 00:04:17.100 "assigned_rate_limits": { 00:04:17.100 "rw_ios_per_sec": 0, 00:04:17.100 "rw_mbytes_per_sec": 0, 00:04:17.100 "r_mbytes_per_sec": 0, 00:04:17.100 "w_mbytes_per_sec": 0 
00:04:17.100 }, 00:04:17.100 "claimed": false, 00:04:17.100 "zoned": false, 00:04:17.100 "supported_io_types": { 00:04:17.100 "read": true, 00:04:17.100 "write": true, 00:04:17.100 "unmap": true, 00:04:17.100 "flush": true, 00:04:17.100 "reset": true, 00:04:17.100 "nvme_admin": false, 00:04:17.100 "nvme_io": false, 00:04:17.100 "nvme_io_md": false, 00:04:17.100 "write_zeroes": true, 00:04:17.100 "zcopy": true, 00:04:17.100 "get_zone_info": false, 00:04:17.100 "zone_management": false, 00:04:17.100 "zone_append": false, 00:04:17.100 "compare": false, 00:04:17.100 "compare_and_write": false, 00:04:17.100 "abort": true, 00:04:17.100 "seek_hole": false, 00:04:17.100 "seek_data": false, 00:04:17.100 "copy": true, 00:04:17.100 "nvme_iov_md": false 00:04:17.100 }, 00:04:17.100 "memory_domains": [ 00:04:17.100 { 00:04:17.100 "dma_device_id": "system", 00:04:17.100 "dma_device_type": 1 00:04:17.100 }, 00:04:17.100 { 00:04:17.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.100 "dma_device_type": 2 00:04:17.100 } 00:04:17.100 ], 00:04:17.100 "driver_specific": { 00:04:17.100 "passthru": { 00:04:17.100 "name": "Passthru0", 00:04:17.100 "base_bdev_name": "Malloc2" 00:04:17.100 } 00:04:17.100 } 00:04:17.100 } 00:04:17.100 ]' 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.100 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.101 00:04:17.101 real 0m0.344s 00:04:17.101 user 0m0.189s 00:04:17.101 sys 0m0.051s 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.101 17:45:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.101 ************************************ 00:04:17.101 END TEST rpc_daemon_integrity 00:04:17.101 ************************************ 00:04:17.360 17:45:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.360 17:45:35 rpc -- rpc/rpc.sh@84 -- # killprocess 56831 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@950 -- # '[' -z 56831 ']' 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@954 -- # kill -0 56831 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@955 -- # uname 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56831 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.360 
killing process with pid 56831 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56831' 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@969 -- # kill 56831 00:04:17.360 17:45:35 rpc -- common/autotest_common.sh@974 -- # wait 56831 00:04:19.903 00:04:19.903 real 0m5.089s 00:04:19.903 user 0m5.569s 00:04:19.903 sys 0m0.956s 00:04:19.903 17:45:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.903 17:45:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.903 ************************************ 00:04:19.903 END TEST rpc 00:04:19.903 ************************************ 00:04:19.903 17:45:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.903 17:45:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.903 17:45:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.903 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:04:19.903 ************************************ 00:04:19.903 START TEST skip_rpc 00:04:19.903 ************************************ 00:04:19.903 17:45:37 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.903 * Looking for test storage... 
00:04:19.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.903 17:45:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.903 --rc genhtml_branch_coverage=1 00:04:19.903 --rc genhtml_function_coverage=1 00:04:19.903 --rc genhtml_legend=1 00:04:19.903 --rc geninfo_all_blocks=1 00:04:19.903 --rc geninfo_unexecuted_blocks=1 00:04:19.903 00:04:19.903 ' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.903 --rc genhtml_branch_coverage=1 00:04:19.903 --rc genhtml_function_coverage=1 00:04:19.903 --rc genhtml_legend=1 00:04:19.903 --rc geninfo_all_blocks=1 00:04:19.903 --rc geninfo_unexecuted_blocks=1 00:04:19.903 00:04:19.903 ' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:04:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.903 --rc genhtml_branch_coverage=1 00:04:19.903 --rc genhtml_function_coverage=1 00:04:19.903 --rc genhtml_legend=1 00:04:19.903 --rc geninfo_all_blocks=1 00:04:19.903 --rc geninfo_unexecuted_blocks=1 00:04:19.903 00:04:19.903 ' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:19.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.903 --rc genhtml_branch_coverage=1 00:04:19.903 --rc genhtml_function_coverage=1 00:04:19.903 --rc genhtml_legend=1 00:04:19.903 --rc geninfo_all_blocks=1 00:04:19.903 --rc geninfo_unexecuted_blocks=1 00:04:19.903 00:04:19.903 ' 00:04:19.903 17:45:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.903 17:45:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.903 17:45:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.903 17:45:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.903 ************************************ 00:04:19.903 START TEST skip_rpc 00:04:19.903 ************************************ 00:04:19.903 17:45:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:19.903 17:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57060 00:04:19.903 17:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.903 17:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.903 17:45:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.903 [2024-10-25 17:45:38.264174] Starting SPDK v25.01-pre 
git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:19.903 [2024-10-25 17:45:38.264295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:04:20.165 [2024-10-25 17:45:38.440549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.165 [2024-10-25 17:45:38.551637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57060 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57060 ']' 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57060 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57060 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57060' 00:04:25.458 killing process with pid 57060 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57060 00:04:25.458 17:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57060 00:04:27.369 00:04:27.369 real 0m7.298s 00:04:27.369 user 0m6.818s 00:04:27.369 sys 0m0.401s 00:04:27.369 17:45:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.370 17:45:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.370 ************************************ 00:04:27.370 END TEST skip_rpc 00:04:27.370 ************************************ 00:04:27.370 17:45:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.370 17:45:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.370 17:45:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.370 17:45:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.370 
************************************ 00:04:27.370 START TEST skip_rpc_with_json 00:04:27.370 ************************************ 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57164 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57164 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57164 ']' 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.370 17:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.370 [2024-10-25 17:45:45.632930] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:04:27.370 [2024-10-25 17:45:45.633131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57164 ] 00:04:27.630 [2024-10-25 17:45:45.809400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.630 [2024-10-25 17:45:45.916393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.569 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.569 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:28.569 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.569 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.569 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.569 [2024-10-25 17:45:46.696791] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.569 request: 00:04:28.569 { 00:04:28.569 "trtype": "tcp", 00:04:28.569 "method": "nvmf_get_transports", 00:04:28.569 "req_id": 1 00:04:28.569 } 00:04:28.569 Got JSON-RPC error response 00:04:28.569 response: 00:04:28.569 { 00:04:28.569 "code": -19, 00:04:28.569 "message": "No such device" 00:04:28.570 } 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.570 [2024-10-25 17:45:46.708934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.570 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.570 { 00:04:28.570 "subsystems": [ 00:04:28.570 { 00:04:28.570 "subsystem": "fsdev", 00:04:28.570 "config": [ 00:04:28.570 { 00:04:28.570 "method": "fsdev_set_opts", 00:04:28.570 "params": { 00:04:28.570 "fsdev_io_pool_size": 65535, 00:04:28.570 "fsdev_io_cache_size": 256 00:04:28.570 } 00:04:28.570 } 00:04:28.570 ] 00:04:28.570 }, 00:04:28.570 { 00:04:28.570 "subsystem": "keyring", 00:04:28.570 "config": [] 00:04:28.570 }, 00:04:28.570 { 00:04:28.570 "subsystem": "iobuf", 00:04:28.570 "config": [ 00:04:28.570 { 00:04:28.570 "method": "iobuf_set_options", 00:04:28.570 "params": { 00:04:28.570 "small_pool_count": 8192, 00:04:28.570 "large_pool_count": 1024, 00:04:28.570 "small_bufsize": 8192, 00:04:28.570 "large_bufsize": 135168, 00:04:28.570 "enable_numa": false 00:04:28.570 } 00:04:28.570 } 00:04:28.570 ] 00:04:28.570 }, 00:04:28.570 { 00:04:28.570 "subsystem": "sock", 00:04:28.570 "config": [ 00:04:28.570 { 00:04:28.570 "method": "sock_set_default_impl", 00:04:28.570 "params": { 00:04:28.570 "impl_name": "posix" 00:04:28.570 } 00:04:28.570 }, 00:04:28.570 { 00:04:28.570 "method": "sock_impl_set_options", 00:04:28.570 "params": { 00:04:28.570 "impl_name": "ssl", 00:04:28.570 "recv_buf_size": 4096, 00:04:28.570 "send_buf_size": 4096, 00:04:28.570 "enable_recv_pipe": true, 00:04:28.570 "enable_quickack": false, 00:04:28.570 
"enable_placement_id": 0,
00:04:28.570 "enable_zerocopy_send_server": true,
00:04:28.570 "enable_zerocopy_send_client": false,
00:04:28.570 "zerocopy_threshold": 0,
00:04:28.570 "tls_version": 0,
00:04:28.570 "enable_ktls": false
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "sock_impl_set_options",
00:04:28.570 "params": {
00:04:28.570 "impl_name": "posix",
00:04:28.570 "recv_buf_size": 2097152,
00:04:28.570 "send_buf_size": 2097152,
00:04:28.570 "enable_recv_pipe": true,
00:04:28.570 "enable_quickack": false,
00:04:28.570 "enable_placement_id": 0,
00:04:28.570 "enable_zerocopy_send_server": true,
00:04:28.570 "enable_zerocopy_send_client": false,
00:04:28.570 "zerocopy_threshold": 0,
00:04:28.570 "tls_version": 0,
00:04:28.570 "enable_ktls": false
00:04:28.570 }
00:04:28.570 }
00:04:28.570 ]
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "vmd",
00:04:28.570 "config": []
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "accel",
00:04:28.570 "config": [
00:04:28.570 {
00:04:28.570 "method": "accel_set_options",
00:04:28.570 "params": {
00:04:28.570 "small_cache_size": 128,
00:04:28.570 "large_cache_size": 16,
00:04:28.570 "task_count": 2048,
00:04:28.570 "sequence_count": 2048,
00:04:28.570 "buf_count": 2048
00:04:28.570 }
00:04:28.570 }
00:04:28.570 ]
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "bdev",
00:04:28.570 "config": [
00:04:28.570 {
00:04:28.570 "method": "bdev_set_options",
00:04:28.570 "params": {
00:04:28.570 "bdev_io_pool_size": 65535,
00:04:28.570 "bdev_io_cache_size": 256,
00:04:28.570 "bdev_auto_examine": true,
00:04:28.570 "iobuf_small_cache_size": 128,
00:04:28.570 "iobuf_large_cache_size": 16
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "bdev_raid_set_options",
00:04:28.570 "params": {
00:04:28.570 "process_window_size_kb": 1024,
00:04:28.570 "process_max_bandwidth_mb_sec": 0
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "bdev_iscsi_set_options",
00:04:28.570 "params": {
00:04:28.570 "timeout_sec": 30
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "bdev_nvme_set_options",
00:04:28.570 "params": {
00:04:28.570 "action_on_timeout": "none",
00:04:28.570 "timeout_us": 0,
00:04:28.570 "timeout_admin_us": 0,
00:04:28.570 "keep_alive_timeout_ms": 10000,
00:04:28.570 "arbitration_burst": 0,
00:04:28.570 "low_priority_weight": 0,
00:04:28.570 "medium_priority_weight": 0,
00:04:28.570 "high_priority_weight": 0,
00:04:28.570 "nvme_adminq_poll_period_us": 10000,
00:04:28.570 "nvme_ioq_poll_period_us": 0,
00:04:28.570 "io_queue_requests": 0,
00:04:28.570 "delay_cmd_submit": true,
00:04:28.570 "transport_retry_count": 4,
00:04:28.570 "bdev_retry_count": 3,
00:04:28.570 "transport_ack_timeout": 0,
00:04:28.570 "ctrlr_loss_timeout_sec": 0,
00:04:28.570 "reconnect_delay_sec": 0,
00:04:28.570 "fast_io_fail_timeout_sec": 0,
00:04:28.570 "disable_auto_failback": false,
00:04:28.570 "generate_uuids": false,
00:04:28.570 "transport_tos": 0,
00:04:28.570 "nvme_error_stat": false,
00:04:28.570 "rdma_srq_size": 0,
00:04:28.570 "io_path_stat": false,
00:04:28.570 "allow_accel_sequence": false,
00:04:28.570 "rdma_max_cq_size": 0,
00:04:28.570 "rdma_cm_event_timeout_ms": 0,
00:04:28.570 "dhchap_digests": [
00:04:28.570 "sha256",
00:04:28.570 "sha384",
00:04:28.570 "sha512"
00:04:28.570 ],
00:04:28.570 "dhchap_dhgroups": [
00:04:28.570 "null",
00:04:28.570 "ffdhe2048",
00:04:28.570 "ffdhe3072",
00:04:28.570 "ffdhe4096",
00:04:28.570 "ffdhe6144",
00:04:28.570 "ffdhe8192"
00:04:28.570 ]
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "bdev_nvme_set_hotplug",
00:04:28.570 "params": {
00:04:28.570 "period_us": 100000,
00:04:28.570 "enable": false
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "bdev_wait_for_examine"
00:04:28.570 }
00:04:28.570 ]
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "scsi",
00:04:28.570 "config": null
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "scheduler",
00:04:28.570 "config": [
00:04:28.570 {
00:04:28.570 "method": "framework_set_scheduler",
00:04:28.570 "params": {
00:04:28.570 "name": "static"
00:04:28.570 }
00:04:28.570 }
00:04:28.570 ]
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "vhost_scsi",
00:04:28.570 "config": []
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "vhost_blk",
00:04:28.570 "config": []
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "ublk",
00:04:28.570 "config": []
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "nbd",
00:04:28.570 "config": []
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "nvmf",
00:04:28.570 "config": [
00:04:28.570 {
00:04:28.570 "method": "nvmf_set_config",
00:04:28.570 "params": {
00:04:28.570 "discovery_filter": "match_any",
00:04:28.570 "admin_cmd_passthru": {
00:04:28.570 "identify_ctrlr": false
00:04:28.570 },
00:04:28.570 "dhchap_digests": [
00:04:28.570 "sha256",
00:04:28.570 "sha384",
00:04:28.570 "sha512"
00:04:28.570 ],
00:04:28.570 "dhchap_dhgroups": [
00:04:28.570 "null",
00:04:28.570 "ffdhe2048",
00:04:28.570 "ffdhe3072",
00:04:28.570 "ffdhe4096",
00:04:28.570 "ffdhe6144",
00:04:28.570 "ffdhe8192"
00:04:28.570 ]
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "nvmf_set_max_subsystems",
00:04:28.570 "params": {
00:04:28.570 "max_subsystems": 1024
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "nvmf_set_crdt",
00:04:28.570 "params": {
00:04:28.570 "crdt1": 0,
00:04:28.570 "crdt2": 0,
00:04:28.570 "crdt3": 0
00:04:28.570 }
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "method": "nvmf_create_transport",
00:04:28.570 "params": {
00:04:28.570 "trtype": "TCP",
00:04:28.570 "max_queue_depth": 128,
00:04:28.570 "max_io_qpairs_per_ctrlr": 127,
00:04:28.570 "in_capsule_data_size": 4096,
00:04:28.570 "max_io_size": 131072,
00:04:28.570 "io_unit_size": 131072,
00:04:28.570 "max_aq_depth": 128,
00:04:28.570 "num_shared_buffers": 511,
00:04:28.570 "buf_cache_size": 4294967295,
00:04:28.570 "dif_insert_or_strip": false,
00:04:28.570 "zcopy": false,
00:04:28.570 "c2h_success": true,
00:04:28.570 "sock_priority": 0,
00:04:28.570 "abort_timeout_sec": 1,
00:04:28.570 "ack_timeout": 0,
00:04:28.570 "data_wr_pool_size": 0
00:04:28.570 }
00:04:28.570 }
00:04:28.570 ]
00:04:28.570 },
00:04:28.570 {
00:04:28.570 "subsystem": "iscsi",
00:04:28.570 "config": [
00:04:28.570 {
00:04:28.571 "method": "iscsi_set_options",
00:04:28.571 "params": {
00:04:28.571 "node_base": "iqn.2016-06.io.spdk",
00:04:28.571 "max_sessions": 128,
00:04:28.571 "max_connections_per_session": 2,
00:04:28.571 "max_queue_depth": 64,
00:04:28.571 "default_time2wait": 2,
00:04:28.571 "default_time2retain": 20,
00:04:28.571 "first_burst_length": 8192,
00:04:28.571 "immediate_data": true,
00:04:28.571 "allow_duplicated_isid": false,
00:04:28.571 "error_recovery_level": 0,
00:04:28.571 "nop_timeout": 60,
00:04:28.571 "nop_in_interval": 30,
00:04:28.571 "disable_chap": false,
00:04:28.571 "require_chap": false,
00:04:28.571 "mutual_chap": false,
00:04:28.571 "chap_group": 0,
00:04:28.571 "max_large_datain_per_connection": 64,
00:04:28.571 "max_r2t_per_connection": 4,
00:04:28.571 "pdu_pool_size": 36864,
00:04:28.571 "immediate_data_pool_size": 16384,
00:04:28.571 "data_out_pool_size": 2048
00:04:28.571 }
00:04:28.571 }
00:04:28.571 ]
00:04:28.571 }
00:04:28.571 ]
00:04:28.571 }
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57164
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57164 ']'
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57164
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57164
killing process with pid 57164
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57164'
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57164
00:04:28.571 17:45:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57164
00:04:31.110 17:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57209
00:04:31.110 17:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:31.110 17:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57209
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57209 ']'
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57209
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57209
killing process with pid 57209
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json --
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57209'
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57209
00:04:36.407 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57209
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:38.317
00:04:38.317 real 0m10.891s
00:04:38.317 user 0m10.316s
00:04:38.317 sys 0m0.854s
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:38.317 ************************************
00:04:38.317 END TEST skip_rpc_with_json
00:04:38.317 ************************************
00:04:38.317 17:45:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.317 ************************************
00:04:38.317 START TEST skip_rpc_with_delay
00:04:38.317 ************************************
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
[2024-10-25 17:45:56.596701] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
************************************
END TEST skip_rpc_with_delay
************************************
00:04:38.317
00:04:38.317 real 0m0.173s
00:04:38.317 user 0m0.101s
00:04:38.317 sys 0m0.070s
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:38.317 17:45:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:38.317 17:45:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:38.317 17:45:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:38.317 17:45:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:38.317 17:45:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.317 ************************************
00:04:38.317 START TEST exit_on_failed_rpc_init
00:04:38.317 ************************************
00:04:38.317 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:04:38.317 17:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57348
00:04:38.317 17:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57348
00:04:38.318 17:45:56
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57348 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:38.318 17:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:38.578 [2024-10-25 17:45:56.820022] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:04:38.578 [2024-10-25 17:45:56.820236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57348 ]
00:04:38.578 [2024-10-25 17:45:56.980968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:38.838 [2024-10-25 17:45:57.088752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:39.778 17:45:57
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:39.778 17:45:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:39.778 [2024-10-25 17:45:57.999051] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:04:39.778 [2024-10-25 17:45:57.999247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ]
00:04:39.778 [2024-10-25 17:45:58.170942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.038 [2024-10-25 17:45:58.284512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:40.038 [2024-10-25 17:45:58.284729] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:40.038 [2024-10-25 17:45:58.284784] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:40.038 [2024-10-25 17:45:58.284813] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57348
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57348 ']'
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57348
00:04:40.298 17:45:58
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57348
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57348'
killing process with pid 57348
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57348
00:04:40.298 17:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57348
00:04:42.860
00:04:42.860 real 0m4.062s
00:04:42.860 user 0m4.359s
00:04:42.860 sys 0m0.567s
00:04:42.860 ************************************
00:04:42.860 END TEST exit_on_failed_rpc_init
00:04:42.860 ************************************
00:04:42.860 17:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.860 17:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:42.860 17:46:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:42.860
00:04:42.860 real 0m22.932s
00:04:42.860 user 0m21.804s
00:04:42.860 sys 0m2.208s
00:04:42.860 17:46:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.860 17:46:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:42.861 ************************************
00:04:42.861 END TEST skip_rpc
00:04:42.861 ************************************
00:04:42.861 17:46:00 -- spdk/autotest.sh@158 -- # run_test rpc_client
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.861 17:46:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.861 17:46:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.861 17:46:00 -- common/autotest_common.sh@10 -- # set +x 00:04:42.861 ************************************ 00:04:42.861 START TEST rpc_client 00:04:42.861 ************************************ 00:04:42.861 17:46:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.861 * Looking for test storage... 00:04:42.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.861 17:46:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.861 --rc genhtml_branch_coverage=1 00:04:42.861 --rc genhtml_function_coverage=1 00:04:42.861 --rc genhtml_legend=1 00:04:42.861 --rc geninfo_all_blocks=1 00:04:42.861 --rc geninfo_unexecuted_blocks=1 00:04:42.861 00:04:42.861 ' 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.861 --rc genhtml_branch_coverage=1 00:04:42.861 --rc genhtml_function_coverage=1 00:04:42.861 --rc 
genhtml_legend=1 00:04:42.861 --rc geninfo_all_blocks=1 00:04:42.861 --rc geninfo_unexecuted_blocks=1 00:04:42.861 00:04:42.861 ' 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.861 --rc genhtml_branch_coverage=1 00:04:42.861 --rc genhtml_function_coverage=1 00:04:42.861 --rc genhtml_legend=1 00:04:42.861 --rc geninfo_all_blocks=1 00:04:42.861 --rc geninfo_unexecuted_blocks=1 00:04:42.861 00:04:42.861 ' 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.861 --rc genhtml_branch_coverage=1 00:04:42.861 --rc genhtml_function_coverage=1 00:04:42.861 --rc genhtml_legend=1 00:04:42.861 --rc geninfo_all_blocks=1 00:04:42.861 --rc geninfo_unexecuted_blocks=1 00:04:42.861 00:04:42.861 ' 00:04:42.861 17:46:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.861 OK 00:04:42.861 17:46:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.861 00:04:42.861 real 0m0.290s 00:04:42.861 user 0m0.152s 00:04:42.861 sys 0m0.151s 00:04:42.861 ************************************ 00:04:42.861 END TEST rpc_client 00:04:42.861 ************************************ 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.861 17:46:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.861 17:46:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.861 17:46:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.861 17:46:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.861 17:46:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.861 ************************************ 00:04:42.861 START TEST json_config 
00:04:42.861 ************************************ 00:04:42.861 17:46:01 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.122 17:46:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.122 17:46:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.122 17:46:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.122 17:46:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.122 17:46:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.122 17:46:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:43.122 17:46:01 json_config -- scripts/common.sh@345 -- # : 1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.122 17:46:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.122 17:46:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@353 -- # local d=1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.122 17:46:01 json_config -- scripts/common.sh@355 -- # echo 1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.122 17:46:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@353 -- # local d=2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.122 17:46:01 json_config -- scripts/common.sh@355 -- # echo 2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.122 17:46:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.122 17:46:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.122 17:46:01 json_config -- scripts/common.sh@368 -- # return 0 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.122 --rc genhtml_branch_coverage=1 00:04:43.122 --rc genhtml_function_coverage=1 00:04:43.122 --rc genhtml_legend=1 00:04:43.122 --rc geninfo_all_blocks=1 00:04:43.122 --rc geninfo_unexecuted_blocks=1 00:04:43.122 00:04:43.122 ' 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.122 --rc genhtml_branch_coverage=1 00:04:43.122 --rc genhtml_function_coverage=1 00:04:43.122 --rc genhtml_legend=1 00:04:43.122 --rc geninfo_all_blocks=1 00:04:43.122 --rc geninfo_unexecuted_blocks=1 00:04:43.122 00:04:43.122 ' 00:04:43.122 17:46:01 json_config -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.122 --rc genhtml_branch_coverage=1 00:04:43.122 --rc genhtml_function_coverage=1 00:04:43.122 --rc genhtml_legend=1 00:04:43.122 --rc geninfo_all_blocks=1 00:04:43.122 --rc geninfo_unexecuted_blocks=1 00:04:43.122 00:04:43.122 ' 00:04:43.122 17:46:01 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.122 --rc genhtml_branch_coverage=1 00:04:43.122 --rc genhtml_function_coverage=1 00:04:43.122 --rc genhtml_legend=1 00:04:43.122 --rc geninfo_all_blocks=1 00:04:43.122 --rc geninfo_unexecuted_blocks=1 00:04:43.122 00:04:43.122 ' 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:991536d8-8d7e-47ec-ad25-340c17aae998 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=991536d8-8d7e-47ec-ad25-340c17aae998 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.123 17:46:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.123 17:46:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.123 17:46:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.123 17:46:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.123 17:46:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.123 17:46:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.123 17:46:01 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.123 17:46:01 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.123 17:46:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@51 -- # : 0 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.123 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.123 17:46:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
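The trace above captures a real shell error from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because `-eq` requires numeric operands and the variable expanded to an empty string. A minimal sketch of that failure mode and the usual guard — the variable and function names here are illustrative, not the ones used in nvmf/common.sh:

```shell
#!/usr/bin/env bash
# An empty expansion with a numeric test operator triggers
# "[: : integer expression expected". Guard by defaulting to 0.
flag_enabled() {
  local flag=$1
  # "${flag:-0}" substitutes 0 when $flag is unset or empty,
  # so -eq always sees an integer.
  [ "${flag:-0}" -eq 1 ]
}

flag_enabled ""  && echo "on" || echo "off"   # empty input no longer errors
flag_enabled 1   && echo "on" || echo "off"
```

The unguarded form `[ "$flag" -eq 1 ]` is what produced the error in the log; the test still evaluated to false there, so the run continued, but the diagnostic ends up interleaved with the trace.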
00:04:43.123 WARNING: No tests are enabled so not running JSON configuration tests 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:43.123 17:46:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:43.123 ************************************ 00:04:43.123 END TEST json_config 00:04:43.123 ************************************ 00:04:43.123 00:04:43.123 real 0m0.229s 00:04:43.123 user 0m0.136s 00:04:43.123 sys 0m0.097s 00:04:43.123 17:46:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.123 17:46:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.384 17:46:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.384 17:46:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.384 17:46:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.384 17:46:01 -- common/autotest_common.sh@10 -- # set +x 00:04:43.384 ************************************ 00:04:43.384 START TEST json_config_extra_key 00:04:43.384 ************************************ 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:43.384 17:46:01 json_config_extra_key -- 
common/autotest_common.sh@1689 -- # lcov --version 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:43.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.384 --rc genhtml_branch_coverage=1 00:04:43.384 --rc genhtml_function_coverage=1 00:04:43.384 --rc genhtml_legend=1 00:04:43.384 --rc geninfo_all_blocks=1 00:04:43.384 --rc geninfo_unexecuted_blocks=1 00:04:43.384 00:04:43.384 ' 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:43.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.384 --rc genhtml_branch_coverage=1 00:04:43.384 --rc genhtml_function_coverage=1 00:04:43.384 --rc 
genhtml_legend=1 00:04:43.384 --rc geninfo_all_blocks=1 00:04:43.384 --rc geninfo_unexecuted_blocks=1 00:04:43.384 00:04:43.384 ' 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:43.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.384 --rc genhtml_branch_coverage=1 00:04:43.384 --rc genhtml_function_coverage=1 00:04:43.384 --rc genhtml_legend=1 00:04:43.384 --rc geninfo_all_blocks=1 00:04:43.384 --rc geninfo_unexecuted_blocks=1 00:04:43.384 00:04:43.384 ' 00:04:43.384 17:46:01 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:43.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.384 --rc genhtml_branch_coverage=1 00:04:43.384 --rc genhtml_function_coverage=1 00:04:43.384 --rc genhtml_legend=1 00:04:43.384 --rc geninfo_all_blocks=1 00:04:43.384 --rc geninfo_unexecuted_blocks=1 00:04:43.384 00:04:43.384 ' 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:991536d8-8d7e-47ec-ad25-340c17aae998 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=991536d8-8d7e-47ec-ad25-340c17aae998 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.384 17:46:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.384 17:46:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.384 17:46:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.384 17:46:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.384 17:46:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:43.384 17:46:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.384 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.384 17:46:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:43.384 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:43.385 INFO: launching applications... 
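The shutdown path traced further down in this run (json_config/common.sh@38–45) sends SIGINT to the target, then repeatedly probes it with `kill -0` and `sleep 0.5`, giving up after 30 attempts. Signal 0 delivers nothing; it only checks that the PID exists and is signalable, which is why the log shows repeated `kill -0 57571` lines between sleeps. A minimal sketch of that pattern, with illustrative names and a parameterized signal:

```shell
#!/usr/bin/env bash
# Graceful stop: signal once, then poll with `kill -0` (existence check)
# until the process exits or the retry budget runs out.
stop_gracefully() {
  local pid=$1 sig=${2:-INT} retries=${3:-30}
  kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
  for ((i = 0; i < retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 0        # exited
    sleep 0.5
  done
  return 1                                        # still alive after budget
}
```

The real helper additionally clears its PID bookkeeping and escalates on timeout; this sketch only shows the poll loop visible in the trace.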
00:04:43.385 17:46:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57571 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.385 Waiting for target to run... 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57571 /var/tmp/spdk_tgt.sock 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57571 ']' 00:04:43.385 17:46:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.385 17:46:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.645 [2024-10-25 17:46:01.903696] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:43.645 [2024-10-25 17:46:01.903901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57571 ] 00:04:43.905 [2024-10-25 17:46:02.284367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.165 [2024-10-25 17:46:02.381997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.736 17:46:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.736 17:46:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:44.736 00:04:44.736 INFO: shutting down applications... 00:04:44.736 17:46:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
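The `waitforlisten` call above blocks until spdk_tgt is up by watching for its UNIX-domain RPC socket (`/var/tmp/spdk_tgt.sock`), bounded by `max_retries`. A hedged sketch of that polling idea — simplified, since the real helper also verifies the PID is alive and that the RPC endpoint answers:

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket path with a bounded number of retries.
wait_for_socket() {
  local sock=$1 retries=${2:-100} delay=${3:-0.1}
  for ((i = 0; i < retries; i++)); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep "$delay"
  done
  return 1
}
```

Note that `-S` is stricter than `-e`: a leftover regular file or FIFO at the socket path does not satisfy it, which avoids declaring the target "ready" on stale artifacts.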
00:04:44.736 17:46:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57571 ]] 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57571 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:44.736 17:46:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.307 17:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.307 17:46:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.307 17:46:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:45.307 17:46:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.878 17:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.878 17:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.878 17:46:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:45.878 17:46:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.448 17:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.448 17:46:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.448 17:46:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:46.448 17:46:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.708 17:46:05 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:46.708 17:46:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.708 17:46:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:46.708 17:46:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.280 17:46:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.280 17:46:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.280 17:46:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:47.280 17:46:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57571 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.851 SPDK target shutdown done 00:04:47.851 17:46:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.851 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.851 Success 00:04:47.851 ************************************ 00:04:47.851 END TEST json_config_extra_key 00:04:47.851 ************************************ 00:04:47.851 00:04:47.851 real 0m4.542s 00:04:47.851 user 0m3.864s 00:04:47.851 sys 0m0.554s 00:04:47.851 17:46:06 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.851 17:46:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.851 17:46:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.851 17:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.851 17:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.851 17:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:47.851 ************************************ 00:04:47.851 START TEST alias_rpc 00:04:47.851 ************************************ 00:04:47.851 17:46:06 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.112 * Looking for test storage... 00:04:48.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.112 17:46:06 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.112 17:46:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.112 --rc genhtml_branch_coverage=1 00:04:48.112 --rc genhtml_function_coverage=1 00:04:48.112 --rc genhtml_legend=1 00:04:48.112 --rc geninfo_all_blocks=1 00:04:48.112 --rc geninfo_unexecuted_blocks=1 00:04:48.112 00:04:48.112 ' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.112 --rc genhtml_branch_coverage=1 00:04:48.112 --rc genhtml_function_coverage=1 00:04:48.112 --rc 
genhtml_legend=1 00:04:48.112 --rc geninfo_all_blocks=1 00:04:48.112 --rc geninfo_unexecuted_blocks=1 00:04:48.112 00:04:48.112 ' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.112 --rc genhtml_branch_coverage=1 00:04:48.112 --rc genhtml_function_coverage=1 00:04:48.112 --rc genhtml_legend=1 00:04:48.112 --rc geninfo_all_blocks=1 00:04:48.112 --rc geninfo_unexecuted_blocks=1 00:04:48.112 00:04:48.112 ' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.112 --rc genhtml_branch_coverage=1 00:04:48.112 --rc genhtml_function_coverage=1 00:04:48.112 --rc genhtml_legend=1 00:04:48.112 --rc geninfo_all_blocks=1 00:04:48.112 --rc geninfo_unexecuted_blocks=1 00:04:48.112 00:04:48.112 ' 00:04:48.112 17:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.112 17:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57682 00:04:48.112 17:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.112 17:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57682 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57682 ']' 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.112 17:46:06 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
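The `lt 1.15 2` / `cmp_versions` trace that repeats at the top of each test section checks the installed lcov version against 1.15 by splitting both version strings on `.`, `-`, and `:` (the `IFS=.-:` / `read -ra` lines) and comparing field by field numerically. A compact sketch of that comparison — it assumes purely numeric fields, whereas the real cmp_versions also dispatches on the operator (`<`, `>`, `<=`, …):

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly less than version $2.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  # Iterate over the longer of the two, padding missing fields with 0.
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < len; v++)); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "older than 2"
```

Comparing numerically rather than lexically matters here: as strings `"1.9" > "1.15"`, but as versions 1.9 < 1.15, which is exactly why the helper splits into fields first.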
00:04:48.113 17:46:06 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.113 17:46:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.113 [2024-10-25 17:46:06.510842] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:48.113 [2024-10-25 17:46:06.511028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57682 ] 00:04:48.372 [2024-10-25 17:46:06.682301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.372 [2024-10-25 17:46:06.792531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.312 17:46:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.312 17:46:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.312 17:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.571 17:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57682 00:04:49.571 17:46:07 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57682 ']' 00:04:49.571 17:46:07 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57682 00:04:49.571 17:46:07 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:49.571 17:46:07 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.572 17:46:07 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57682 00:04:49.572 17:46:07 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.572 17:46:07 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.572 17:46:07 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57682' 00:04:49.572 killing process with pid 57682 00:04:49.572 17:46:07 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57682 00:04:49.572 17:46:07 alias_rpc -- common/autotest_common.sh@974 -- # wait 57682 00:04:52.111 00:04:52.111 real 0m3.950s 00:04:52.111 user 0m3.904s 00:04:52.111 sys 0m0.563s 00:04:52.111 17:46:10 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.111 17:46:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.111 ************************************ 00:04:52.111 END TEST alias_rpc 00:04:52.111 ************************************ 00:04:52.111 17:46:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:52.111 17:46:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.111 17:46:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.111 17:46:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.111 17:46:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.111 ************************************ 00:04:52.111 START TEST spdkcli_tcp 00:04:52.111 ************************************ 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.111 * Looking for test storage... 
00:04:52.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.111 17:46:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:52.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.111 --rc genhtml_branch_coverage=1 00:04:52.111 --rc genhtml_function_coverage=1 00:04:52.111 --rc genhtml_legend=1 00:04:52.111 --rc geninfo_all_blocks=1 00:04:52.111 --rc geninfo_unexecuted_blocks=1 00:04:52.111 00:04:52.111 ' 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:52.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.111 --rc genhtml_branch_coverage=1 00:04:52.111 --rc genhtml_function_coverage=1 00:04:52.111 --rc genhtml_legend=1 00:04:52.111 --rc geninfo_all_blocks=1 00:04:52.111 --rc geninfo_unexecuted_blocks=1 00:04:52.111 00:04:52.111 ' 00:04:52.111 17:46:10 spdkcli_tcp -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:52.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.111 --rc genhtml_branch_coverage=1 00:04:52.111 --rc genhtml_function_coverage=1 00:04:52.111 --rc genhtml_legend=1 00:04:52.111 --rc geninfo_all_blocks=1 00:04:52.111 --rc geninfo_unexecuted_blocks=1 00:04:52.111 00:04:52.111 ' 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:52.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.111 --rc genhtml_branch_coverage=1 00:04:52.111 --rc genhtml_function_coverage=1 00:04:52.111 --rc genhtml_legend=1 00:04:52.111 --rc geninfo_all_blocks=1 00:04:52.111 --rc geninfo_unexecuted_blocks=1 00:04:52.111 00:04:52.111 ' 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57789 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:52.111 17:46:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57789 00:04:52.111 17:46:10 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57789 ']' 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.111 17:46:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.111 [2024-10-25 17:46:10.544002] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:52.111 [2024-10-25 17:46:10.544256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57789 ] 00:04:52.371 [2024-10-25 17:46:10.722282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.630 [2024-10-25 17:46:10.829948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.630 [2024-10-25 17:46:10.829992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:53.569 17:46:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.569 17:46:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57806 00:04:53.569 17:46:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:53.569 [ 00:04:53.569 "bdev_malloc_delete", 
00:04:53.569 "bdev_malloc_create", 00:04:53.569 "bdev_null_resize", 00:04:53.569 "bdev_null_delete", 00:04:53.569 "bdev_null_create", 00:04:53.569 "bdev_nvme_cuse_unregister", 00:04:53.569 "bdev_nvme_cuse_register", 00:04:53.569 "bdev_opal_new_user", 00:04:53.569 "bdev_opal_set_lock_state", 00:04:53.569 "bdev_opal_delete", 00:04:53.569 "bdev_opal_get_info", 00:04:53.569 "bdev_opal_create", 00:04:53.569 "bdev_nvme_opal_revert", 00:04:53.569 "bdev_nvme_opal_init", 00:04:53.569 "bdev_nvme_send_cmd", 00:04:53.569 "bdev_nvme_set_keys", 00:04:53.569 "bdev_nvme_get_path_iostat", 00:04:53.569 "bdev_nvme_get_mdns_discovery_info", 00:04:53.569 "bdev_nvme_stop_mdns_discovery", 00:04:53.569 "bdev_nvme_start_mdns_discovery", 00:04:53.569 "bdev_nvme_set_multipath_policy", 00:04:53.569 "bdev_nvme_set_preferred_path", 00:04:53.569 "bdev_nvme_get_io_paths", 00:04:53.569 "bdev_nvme_remove_error_injection", 00:04:53.569 "bdev_nvme_add_error_injection", 00:04:53.569 "bdev_nvme_get_discovery_info", 00:04:53.569 "bdev_nvme_stop_discovery", 00:04:53.569 "bdev_nvme_start_discovery", 00:04:53.569 "bdev_nvme_get_controller_health_info", 00:04:53.569 "bdev_nvme_disable_controller", 00:04:53.569 "bdev_nvme_enable_controller", 00:04:53.569 "bdev_nvme_reset_controller", 00:04:53.569 "bdev_nvme_get_transport_statistics", 00:04:53.569 "bdev_nvme_apply_firmware", 00:04:53.569 "bdev_nvme_detach_controller", 00:04:53.569 "bdev_nvme_get_controllers", 00:04:53.569 "bdev_nvme_attach_controller", 00:04:53.569 "bdev_nvme_set_hotplug", 00:04:53.569 "bdev_nvme_set_options", 00:04:53.569 "bdev_passthru_delete", 00:04:53.569 "bdev_passthru_create", 00:04:53.569 "bdev_lvol_set_parent_bdev", 00:04:53.569 "bdev_lvol_set_parent", 00:04:53.569 "bdev_lvol_check_shallow_copy", 00:04:53.569 "bdev_lvol_start_shallow_copy", 00:04:53.569 "bdev_lvol_grow_lvstore", 00:04:53.569 "bdev_lvol_get_lvols", 00:04:53.569 "bdev_lvol_get_lvstores", 00:04:53.569 "bdev_lvol_delete", 00:04:53.569 "bdev_lvol_set_read_only", 
00:04:53.569 "bdev_lvol_resize", 00:04:53.569 "bdev_lvol_decouple_parent", 00:04:53.569 "bdev_lvol_inflate", 00:04:53.569 "bdev_lvol_rename", 00:04:53.569 "bdev_lvol_clone_bdev", 00:04:53.569 "bdev_lvol_clone", 00:04:53.569 "bdev_lvol_snapshot", 00:04:53.569 "bdev_lvol_create", 00:04:53.569 "bdev_lvol_delete_lvstore", 00:04:53.569 "bdev_lvol_rename_lvstore", 00:04:53.569 "bdev_lvol_create_lvstore", 00:04:53.569 "bdev_raid_set_options", 00:04:53.569 "bdev_raid_remove_base_bdev", 00:04:53.569 "bdev_raid_add_base_bdev", 00:04:53.569 "bdev_raid_delete", 00:04:53.569 "bdev_raid_create", 00:04:53.569 "bdev_raid_get_bdevs", 00:04:53.569 "bdev_error_inject_error", 00:04:53.569 "bdev_error_delete", 00:04:53.569 "bdev_error_create", 00:04:53.569 "bdev_split_delete", 00:04:53.569 "bdev_split_create", 00:04:53.569 "bdev_delay_delete", 00:04:53.569 "bdev_delay_create", 00:04:53.569 "bdev_delay_update_latency", 00:04:53.569 "bdev_zone_block_delete", 00:04:53.569 "bdev_zone_block_create", 00:04:53.569 "blobfs_create", 00:04:53.569 "blobfs_detect", 00:04:53.569 "blobfs_set_cache_size", 00:04:53.569 "bdev_aio_delete", 00:04:53.569 "bdev_aio_rescan", 00:04:53.569 "bdev_aio_create", 00:04:53.569 "bdev_ftl_set_property", 00:04:53.569 "bdev_ftl_get_properties", 00:04:53.569 "bdev_ftl_get_stats", 00:04:53.569 "bdev_ftl_unmap", 00:04:53.569 "bdev_ftl_unload", 00:04:53.569 "bdev_ftl_delete", 00:04:53.569 "bdev_ftl_load", 00:04:53.569 "bdev_ftl_create", 00:04:53.569 "bdev_virtio_attach_controller", 00:04:53.569 "bdev_virtio_scsi_get_devices", 00:04:53.569 "bdev_virtio_detach_controller", 00:04:53.569 "bdev_virtio_blk_set_hotplug", 00:04:53.569 "bdev_iscsi_delete", 00:04:53.569 "bdev_iscsi_create", 00:04:53.569 "bdev_iscsi_set_options", 00:04:53.569 "accel_error_inject_error", 00:04:53.569 "ioat_scan_accel_module", 00:04:53.569 "dsa_scan_accel_module", 00:04:53.569 "iaa_scan_accel_module", 00:04:53.569 "keyring_file_remove_key", 00:04:53.569 "keyring_file_add_key", 00:04:53.569 
"keyring_linux_set_options", 00:04:53.569 "fsdev_aio_delete", 00:04:53.569 "fsdev_aio_create", 00:04:53.569 "iscsi_get_histogram", 00:04:53.569 "iscsi_enable_histogram", 00:04:53.569 "iscsi_set_options", 00:04:53.569 "iscsi_get_auth_groups", 00:04:53.569 "iscsi_auth_group_remove_secret", 00:04:53.569 "iscsi_auth_group_add_secret", 00:04:53.569 "iscsi_delete_auth_group", 00:04:53.569 "iscsi_create_auth_group", 00:04:53.569 "iscsi_set_discovery_auth", 00:04:53.569 "iscsi_get_options", 00:04:53.569 "iscsi_target_node_request_logout", 00:04:53.569 "iscsi_target_node_set_redirect", 00:04:53.569 "iscsi_target_node_set_auth", 00:04:53.569 "iscsi_target_node_add_lun", 00:04:53.569 "iscsi_get_stats", 00:04:53.569 "iscsi_get_connections", 00:04:53.569 "iscsi_portal_group_set_auth", 00:04:53.569 "iscsi_start_portal_group", 00:04:53.569 "iscsi_delete_portal_group", 00:04:53.569 "iscsi_create_portal_group", 00:04:53.569 "iscsi_get_portal_groups", 00:04:53.569 "iscsi_delete_target_node", 00:04:53.569 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.569 "iscsi_target_node_add_pg_ig_maps", 00:04:53.569 "iscsi_create_target_node", 00:04:53.569 "iscsi_get_target_nodes", 00:04:53.569 "iscsi_delete_initiator_group", 00:04:53.569 "iscsi_initiator_group_remove_initiators", 00:04:53.569 "iscsi_initiator_group_add_initiators", 00:04:53.569 "iscsi_create_initiator_group", 00:04:53.569 "iscsi_get_initiator_groups", 00:04:53.569 "nvmf_set_crdt", 00:04:53.569 "nvmf_set_config", 00:04:53.569 "nvmf_set_max_subsystems", 00:04:53.569 "nvmf_stop_mdns_prr", 00:04:53.569 "nvmf_publish_mdns_prr", 00:04:53.569 "nvmf_subsystem_get_listeners", 00:04:53.569 "nvmf_subsystem_get_qpairs", 00:04:53.569 "nvmf_subsystem_get_controllers", 00:04:53.569 "nvmf_get_stats", 00:04:53.569 "nvmf_get_transports", 00:04:53.569 "nvmf_create_transport", 00:04:53.569 "nvmf_get_targets", 00:04:53.569 "nvmf_delete_target", 00:04:53.569 "nvmf_create_target", 00:04:53.569 "nvmf_subsystem_allow_any_host", 00:04:53.569 
"nvmf_subsystem_set_keys", 00:04:53.569 "nvmf_subsystem_remove_host", 00:04:53.569 "nvmf_subsystem_add_host", 00:04:53.569 "nvmf_ns_remove_host", 00:04:53.569 "nvmf_ns_add_host", 00:04:53.569 "nvmf_subsystem_remove_ns", 00:04:53.569 "nvmf_subsystem_set_ns_ana_group", 00:04:53.569 "nvmf_subsystem_add_ns", 00:04:53.569 "nvmf_subsystem_listener_set_ana_state", 00:04:53.569 "nvmf_discovery_get_referrals", 00:04:53.569 "nvmf_discovery_remove_referral", 00:04:53.569 "nvmf_discovery_add_referral", 00:04:53.569 "nvmf_subsystem_remove_listener", 00:04:53.569 "nvmf_subsystem_add_listener", 00:04:53.569 "nvmf_delete_subsystem", 00:04:53.569 "nvmf_create_subsystem", 00:04:53.569 "nvmf_get_subsystems", 00:04:53.569 "env_dpdk_get_mem_stats", 00:04:53.569 "nbd_get_disks", 00:04:53.569 "nbd_stop_disk", 00:04:53.569 "nbd_start_disk", 00:04:53.569 "ublk_recover_disk", 00:04:53.569 "ublk_get_disks", 00:04:53.569 "ublk_stop_disk", 00:04:53.569 "ublk_start_disk", 00:04:53.569 "ublk_destroy_target", 00:04:53.569 "ublk_create_target", 00:04:53.569 "virtio_blk_create_transport", 00:04:53.569 "virtio_blk_get_transports", 00:04:53.569 "vhost_controller_set_coalescing", 00:04:53.569 "vhost_get_controllers", 00:04:53.569 "vhost_delete_controller", 00:04:53.569 "vhost_create_blk_controller", 00:04:53.569 "vhost_scsi_controller_remove_target", 00:04:53.569 "vhost_scsi_controller_add_target", 00:04:53.569 "vhost_start_scsi_controller", 00:04:53.569 "vhost_create_scsi_controller", 00:04:53.569 "thread_set_cpumask", 00:04:53.569 "scheduler_set_options", 00:04:53.569 "framework_get_governor", 00:04:53.569 "framework_get_scheduler", 00:04:53.569 "framework_set_scheduler", 00:04:53.569 "framework_get_reactors", 00:04:53.569 "thread_get_io_channels", 00:04:53.569 "thread_get_pollers", 00:04:53.569 "thread_get_stats", 00:04:53.569 "framework_monitor_context_switch", 00:04:53.569 "spdk_kill_instance", 00:04:53.569 "log_enable_timestamps", 00:04:53.569 "log_get_flags", 00:04:53.569 "log_clear_flag", 
00:04:53.569 "log_set_flag", 00:04:53.569 "log_get_level", 00:04:53.569 "log_set_level", 00:04:53.569 "log_get_print_level", 00:04:53.569 "log_set_print_level", 00:04:53.569 "framework_enable_cpumask_locks", 00:04:53.569 "framework_disable_cpumask_locks", 00:04:53.569 "framework_wait_init", 00:04:53.569 "framework_start_init", 00:04:53.569 "scsi_get_devices", 00:04:53.569 "bdev_get_histogram", 00:04:53.569 "bdev_enable_histogram", 00:04:53.569 "bdev_set_qos_limit", 00:04:53.569 "bdev_set_qd_sampling_period", 00:04:53.569 "bdev_get_bdevs", 00:04:53.569 "bdev_reset_iostat", 00:04:53.569 "bdev_get_iostat", 00:04:53.569 "bdev_examine", 00:04:53.569 "bdev_wait_for_examine", 00:04:53.569 "bdev_set_options", 00:04:53.569 "accel_get_stats", 00:04:53.569 "accel_set_options", 00:04:53.569 "accel_set_driver", 00:04:53.569 "accel_crypto_key_destroy", 00:04:53.569 "accel_crypto_keys_get", 00:04:53.569 "accel_crypto_key_create", 00:04:53.569 "accel_assign_opc", 00:04:53.569 "accel_get_module_info", 00:04:53.569 "accel_get_opc_assignments", 00:04:53.569 "vmd_rescan", 00:04:53.569 "vmd_remove_device", 00:04:53.569 "vmd_enable", 00:04:53.569 "sock_get_default_impl", 00:04:53.569 "sock_set_default_impl", 00:04:53.569 "sock_impl_set_options", 00:04:53.569 "sock_impl_get_options", 00:04:53.569 "iobuf_get_stats", 00:04:53.569 "iobuf_set_options", 00:04:53.569 "keyring_get_keys", 00:04:53.569 "framework_get_pci_devices", 00:04:53.569 "framework_get_config", 00:04:53.569 "framework_get_subsystems", 00:04:53.569 "fsdev_set_opts", 00:04:53.569 "fsdev_get_opts", 00:04:53.569 "trace_get_info", 00:04:53.569 "trace_get_tpoint_group_mask", 00:04:53.569 "trace_disable_tpoint_group", 00:04:53.569 "trace_enable_tpoint_group", 00:04:53.569 "trace_clear_tpoint_mask", 00:04:53.569 "trace_set_tpoint_mask", 00:04:53.569 "notify_get_notifications", 00:04:53.569 "notify_get_types", 00:04:53.569 "spdk_get_version", 00:04:53.569 "rpc_get_methods" 00:04:53.569 ] 00:04:53.569 17:46:11 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.569 17:46:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:53.569 17:46:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57789 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57789 ']' 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57789 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57789 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.569 killing process with pid 57789 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57789' 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57789 00:04:53.569 17:46:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57789 00:04:56.111 ************************************ 00:04:56.111 END TEST spdkcli_tcp 00:04:56.111 ************************************ 00:04:56.111 00:04:56.111 real 0m4.012s 00:04:56.111 user 0m7.060s 00:04:56.111 sys 0m0.647s 00:04:56.111 17:46:14 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.111 17:46:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.111 17:46:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.111 17:46:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.111 17:46:14 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.111 17:46:14 -- common/autotest_common.sh@10 -- # set +x 00:04:56.111 ************************************ 00:04:56.111 START TEST dpdk_mem_utility 00:04:56.111 ************************************ 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.111 * Looking for test storage... 00:04:56.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:56.111 
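The xtrace lines above (scripts/common.sh@333-368) show cmp_versions deciding that the installed lcov 1.15 predates 2.x, which is what selects the `--rc lcov_*` coverage options for the rest of the run. A minimal standalone sketch of that comparison, with the function names and the ".-:" field-splitting behavior assumed from the trace rather than copied from the real scripts/common.sh:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the cmp_versions/lt helpers traced above:
# split both version strings on ".-:", then compare numerically field by field.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        # unset trailing fields evaluate to 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
        (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
    done
    # all fields equal
    [[ $op == '=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken in the log above
```

Note the per-field numeric compare: 1.15 sorts before 2 but after 1.9, which plain string comparison would get wrong.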
17:46:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.111 17:46:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:56.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.111 --rc genhtml_branch_coverage=1 00:04:56.111 --rc genhtml_function_coverage=1 00:04:56.111 --rc genhtml_legend=1 00:04:56.111 --rc geninfo_all_blocks=1 00:04:56.111 --rc geninfo_unexecuted_blocks=1 00:04:56.111 00:04:56.111 ' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:56.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.111 --rc 
genhtml_branch_coverage=1 00:04:56.111 --rc genhtml_function_coverage=1 00:04:56.111 --rc genhtml_legend=1 00:04:56.111 --rc geninfo_all_blocks=1 00:04:56.111 --rc geninfo_unexecuted_blocks=1 00:04:56.111 00:04:56.111 ' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:56.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.111 --rc genhtml_branch_coverage=1 00:04:56.111 --rc genhtml_function_coverage=1 00:04:56.111 --rc genhtml_legend=1 00:04:56.111 --rc geninfo_all_blocks=1 00:04:56.111 --rc geninfo_unexecuted_blocks=1 00:04:56.111 00:04:56.111 ' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:56.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.111 --rc genhtml_branch_coverage=1 00:04:56.111 --rc genhtml_function_coverage=1 00:04:56.111 --rc genhtml_legend=1 00:04:56.111 --rc geninfo_all_blocks=1 00:04:56.111 --rc geninfo_unexecuted_blocks=1 00:04:56.111 00:04:56.111 ' 00:04:56.111 17:46:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.111 17:46:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57906 00:04:56.111 17:46:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.111 17:46:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57906 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57906 ']' 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.111 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.112 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:56.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.112 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.112 17:46:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.372 [2024-10-25 17:46:14.612504] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:56.372 [2024-10-25 17:46:14.612710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:04:56.372 [2024-10-25 17:46:14.776154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.632 [2024-10-25 17:46:14.883052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.576 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.576 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:57.576 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.576 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.576 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.576 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.576 { 00:04:57.576 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.576 } 00:04:57.576 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.576 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.576 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:57.576 1 heaps totaling size 816.000000 MiB 00:04:57.576 size: 
816.000000 MiB heap id: 0 00:04:57.576 end heaps---------- 00:04:57.576 9 mempools totaling size 595.772034 MiB 00:04:57.576 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.576 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.576 size: 92.545471 MiB name: bdev_io_57906 00:04:57.576 size: 50.003479 MiB name: msgpool_57906 00:04:57.576 size: 36.509338 MiB name: fsdev_io_57906 00:04:57.576 size: 21.763794 MiB name: PDU_Pool 00:04:57.576 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.576 size: 4.133484 MiB name: evtpool_57906 00:04:57.576 size: 0.026123 MiB name: Session_Pool 00:04:57.576 end mempools------- 00:04:57.576 6 memzones totaling size 4.142822 MiB 00:04:57.576 size: 1.000366 MiB name: RG_ring_0_57906 00:04:57.576 size: 1.000366 MiB name: RG_ring_1_57906 00:04:57.576 size: 1.000366 MiB name: RG_ring_4_57906 00:04:57.576 size: 1.000366 MiB name: RG_ring_5_57906 00:04:57.576 size: 0.125366 MiB name: RG_ring_2_57906 00:04:57.576 size: 0.015991 MiB name: RG_ring_3_57906 00:04:57.576 end memzones------- 00:04:57.576 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.576 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:04:57.576 list of free elements. 
size: 16.790649 MiB 00:04:57.576 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:57.576 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:57.576 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:57.576 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:57.576 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:57.576 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:57.576 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:57.576 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:57.576 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:57.576 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:57.576 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:57.576 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:04:57.576 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:57.576 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:57.576 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:57.576 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:57.576 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:57.576 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:57.576 list of standard malloc elements. 
size: 199.288452 MiB 00:04:57.576 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:57.576 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:57.576 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:57.576 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:57.576 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:57.576 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:57.576 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:57.576 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:57.576 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:57.576 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:57.576 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:57.576 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:57.576 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:57.576 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:57.576 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:57.576 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:57.576 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:57.577 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:57.577 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:57.577 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac90ec0 with size: 0.000244 
MiB 00:04:57.577 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92ac0 
with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:57.577 element at 
address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:57.577 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806b980 with size: 0.000244 MiB 
00:04:57.577 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:57.577 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d580 with 
size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:57.578 element at address: 
0x20002806f180 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:57.578 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:57.578 list of memzone associated elements. 
size: 599.920898 MiB 00:04:57.578 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:57.578 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.578 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:57.578 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.578 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:57.578 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57906_0 00:04:57.578 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:57.578 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57906_0 00:04:57.578 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:57.578 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57906_0 00:04:57.578 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:57.578 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.578 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:57.578 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.578 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:57.578 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57906_0 00:04:57.578 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:57.578 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57906 00:04:57.578 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:57.578 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57906 00:04:57.578 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:57.578 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.578 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:57.578 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.578 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:57.578 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.578 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:57.578 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.578 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:57.578 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57906 00:04:57.578 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:57.578 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57906 00:04:57.578 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:57.578 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57906 00:04:57.578 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:57.578 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57906 00:04:57.578 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:57.578 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57906 00:04:57.578 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:57.578 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57906 00:04:57.578 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:57.578 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.578 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:57.578 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.578 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:57.578 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.578 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:57.578 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57906 00:04:57.578 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:57.578 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57906 00:04:57.578 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:57.578 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.578 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:57.578 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.578 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:57.578 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57906 00:04:57.578 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:57.578 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.578 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:57.578 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57906 00:04:57.578 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:57.578 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57906 00:04:57.578 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:57.578 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57906 00:04:57.578 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:57.578 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.578 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.578 17:46:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57906 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57906 ']' 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57906 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57906 00:04:57.578 killing process with pid 57906 00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 
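The dump above is a long run of `element at address: … with size: … MiB` records from the DPDK memory-utility test. When auditing such a dump offline, one way to sanity-check it is to parse those records and total the reported sizes. A minimal sketch (the regex, function name, and sample input are illustrative assumptions, not part of the SPDK test scripts):

```python
import re

# Matches records of the form emitted in the dump above, e.g.
#   "element at address: 0x2000004fe040 with size: 0.000244 MiB"
ELEMENT_RE = re.compile(
    r"element at address: (0x[0-9a-f]+) with size: ([0-9.]+) MiB"
)

def total_element_size(log_text: str) -> float:
    """Sum the sizes (in MiB) of all elements reported in a dump."""
    return sum(float(m.group(2)) for m in ELEMENT_RE.finditer(log_text))

# Two records copied from the dump above, joined as they appear in the log.
sample = (
    "element at address: 0x2000004fe040 with size: 0.000244 MiB "
    "element at address: 0x2000004fe140 with size: 0.000244 MiB"
)
print(total_element_size(sample))
```

Because the log wraps records across physical lines, matching on the full `element at address … MiB` pattern (rather than line-by-line) is what keeps the parse robust.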
00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57906'
00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57906
00:04:57.578 17:46:15 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57906
00:05:00.157 
00:05:00.157 real	0m3.784s
00:05:00.157 user	0m3.689s
00:05:00.157 sys	0m0.515s
00:05:00.157 17:46:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:00.157 ************************************
00:05:00.157 END TEST dpdk_mem_utility
00:05:00.157 ************************************
00:05:00.157 17:46:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:00.157 17:46:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:00.157 17:46:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:00.157 17:46:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.157 17:46:18 -- common/autotest_common.sh@10 -- # set +x
00:05:00.157 ************************************
00:05:00.157 START TEST event
00:05:00.157 ************************************
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:00.157 * Looking for test storage...
00:05:00.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1689 -- # lcov --version
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:05:00.157 17:46:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:00.157 17:46:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:00.157 17:46:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:00.157 17:46:18 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:00.157 17:46:18 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:00.157 17:46:18 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:00.157 17:46:18 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:00.157 17:46:18 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:00.157 17:46:18 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:00.157 17:46:18 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:00.157 17:46:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:00.157 17:46:18 event -- scripts/common.sh@344 -- # case "$op" in
00:05:00.157 17:46:18 event -- scripts/common.sh@345 -- # : 1
00:05:00.157 17:46:18 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:00.157 17:46:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:00.157 17:46:18 event -- scripts/common.sh@365 -- # decimal 1
00:05:00.157 17:46:18 event -- scripts/common.sh@353 -- # local d=1
00:05:00.157 17:46:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:00.157 17:46:18 event -- scripts/common.sh@355 -- # echo 1
00:05:00.157 17:46:18 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:00.157 17:46:18 event -- scripts/common.sh@366 -- # decimal 2
00:05:00.157 17:46:18 event -- scripts/common.sh@353 -- # local d=2
00:05:00.157 17:46:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:00.157 17:46:18 event -- scripts/common.sh@355 -- # echo 2
00:05:00.157 17:46:18 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:00.157 17:46:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:00.157 17:46:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:00.157 17:46:18 event -- scripts/common.sh@368 -- # return 0
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:05:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.157 --rc genhtml_branch_coverage=1
00:05:00.157 --rc genhtml_function_coverage=1
00:05:00.157 --rc genhtml_legend=1
00:05:00.157 --rc geninfo_all_blocks=1
00:05:00.157 --rc geninfo_unexecuted_blocks=1
00:05:00.157 
00:05:00.157 '
00:05:00.157 17:46:18 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:05:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.157 --rc genhtml_branch_coverage=1
00:05:00.157 --rc genhtml_function_coverage=1
00:05:00.157 --rc genhtml_legend=1
00:05:00.157 --rc geninfo_all_blocks=1
00:05:00.157 --rc geninfo_unexecuted_blocks=1
00:05:00.158 
00:05:00.158 '
00:05:00.158 17:46:18 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:05:00.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.158 --rc genhtml_branch_coverage=1
00:05:00.158 --rc genhtml_function_coverage=1
00:05:00.158 --rc genhtml_legend=1
00:05:00.158 --rc geninfo_all_blocks=1
00:05:00.158 --rc geninfo_unexecuted_blocks=1
00:05:00.158 
00:05:00.158 '
00:05:00.158 17:46:18 event -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:05:00.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.158 --rc genhtml_branch_coverage=1
00:05:00.158 --rc genhtml_function_coverage=1
00:05:00.158 --rc genhtml_legend=1
00:05:00.158 --rc geninfo_all_blocks=1
00:05:00.158 --rc geninfo_unexecuted_blocks=1
00:05:00.158 
00:05:00.158 '
00:05:00.158 17:46:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:00.158 17:46:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:00.158 17:46:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:00.158 17:46:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:00.158 17:46:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.158 17:46:18 event -- common/autotest_common.sh@10 -- # set +x
00:05:00.158 ************************************
00:05:00.158 START TEST event_perf
00:05:00.158 ************************************
00:05:00.158 17:46:18 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:00.158 Running I/O for 1 seconds...[2024-10-25 17:46:18.435533] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:05:00.158 [2024-10-25 17:46:18.435691] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ]
00:05:00.418 [2024-10-25 17:46:18.607427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:00.418 [2024-10-25 17:46:18.717662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.418 [2024-10-25 17:46:18.717950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:00.418 [2024-10-25 17:46:18.717953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.418 [2024-10-25 17:46:18.717987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:01.801 Running I/O for 1 seconds...
00:05:01.801 lcore 0: 106574
00:05:01.801 lcore 1: 106573
00:05:01.801 lcore 2: 106570
00:05:01.801 lcore 3: 106573
00:05:01.801 done.
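Each `lcore N: <count>` line in the event_perf output above is the per-core event count reported after the one-second run. A small sketch of how those lines could be aggregated into per-core and total figures when post-processing a log like this (the function and sample names are hypothetical helpers, not part of the test suite):

```python
import re

# Matches the per-core counter lines printed by event_perf, e.g. "lcore 0: 106574".
LCORE_RE = re.compile(r"lcore (\d+): (\d+)")

def events_per_core(log_text: str) -> dict:
    """Map lcore id -> events completed during the timed run."""
    return {int(core): int(count) for core, count in LCORE_RE.findall(log_text)}

# Counters copied from the run above.
sample = "lcore 0: 106574 lcore 1: 106573 lcore 2: 106570 lcore 3: 106573"
counts = events_per_core(sample)
print(sum(counts.values()))  # total events across the four reactors: 426290
```

Since the run lasted one second (`-t 1`), the total doubles as an events-per-second figure for the 0xF core mask.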
00:05:01.801 00:05:01.801 real 0m1.566s 00:05:01.801 user 0m4.308s 00:05:01.801 sys 0m0.122s 00:05:01.801 17:46:19 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.801 ************************************ 00:05:01.801 END TEST event_perf 00:05:01.801 ************************************ 00:05:01.801 17:46:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.801 17:46:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.801 17:46:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:01.801 17:46:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.801 17:46:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.801 ************************************ 00:05:01.801 START TEST event_reactor 00:05:01.801 ************************************ 00:05:01.801 17:46:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.801 [2024-10-25 17:46:20.069145] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:01.801 [2024-10-25 17:46:20.069333] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58053 ] 00:05:02.061 [2024-10-25 17:46:20.246428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.061 [2024-10-25 17:46:20.354614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.445 test_start 00:05:03.445 oneshot 00:05:03.445 tick 100 00:05:03.445 tick 100 00:05:03.445 tick 250 00:05:03.445 tick 100 00:05:03.445 tick 100 00:05:03.445 tick 100 00:05:03.445 tick 250 00:05:03.445 tick 500 00:05:03.445 tick 100 00:05:03.445 tick 100 00:05:03.445 tick 250 00:05:03.445 tick 100 00:05:03.445 tick 100 00:05:03.445 test_end 00:05:03.445 00:05:03.445 real 0m1.548s 00:05:03.445 user 0m1.350s 00:05:03.445 sys 0m0.090s 00:05:03.445 17:46:21 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.445 17:46:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:03.445 ************************************ 00:05:03.445 END TEST event_reactor 00:05:03.445 ************************************ 00:05:03.445 17:46:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.445 17:46:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:03.445 17:46:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.445 17:46:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.445 ************************************ 00:05:03.445 START TEST event_reactor_perf 00:05:03.445 ************************************ 00:05:03.445 17:46:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.445 [2024-10-25 
17:46:21.677332] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:03.445 [2024-10-25 17:46:21.677482] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58090 ] 00:05:03.445 [2024-10-25 17:46:21.851303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.705 [2024-10-25 17:46:21.962364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.088 test_start 00:05:05.088 test_end 00:05:05.088 Performance: 406967 events per second 00:05:05.088 ************************************ 00:05:05.088 END TEST event_reactor_perf 00:05:05.088 ************************************ 00:05:05.088 00:05:05.088 real 0m1.550s 00:05:05.088 user 0m1.337s 00:05:05.088 sys 0m0.104s 00:05:05.088 17:46:23 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.088 17:46:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 17:46:23 event -- event/event.sh@49 -- # uname -s 00:05:05.088 17:46:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.088 17:46:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.088 17:46:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.088 17:46:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.088 17:46:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 ************************************ 00:05:05.088 START TEST event_scheduler 00:05:05.088 ************************************ 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.088 * Looking for test storage... 
00:05:05.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.088 17:46:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.088 --rc genhtml_branch_coverage=1 00:05:05.088 --rc genhtml_function_coverage=1 00:05:05.088 --rc genhtml_legend=1 00:05:05.088 --rc geninfo_all_blocks=1 00:05:05.088 --rc geninfo_unexecuted_blocks=1 00:05:05.088 00:05:05.088 ' 00:05:05.088 17:46:23 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.089 --rc genhtml_branch_coverage=1 00:05:05.089 --rc genhtml_function_coverage=1 00:05:05.089 --rc 
genhtml_legend=1 00:05:05.089 --rc geninfo_all_blocks=1 00:05:05.089 --rc geninfo_unexecuted_blocks=1 00:05:05.089 00:05:05.089 ' 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.089 --rc genhtml_branch_coverage=1 00:05:05.089 --rc genhtml_function_coverage=1 00:05:05.089 --rc genhtml_legend=1 00:05:05.089 --rc geninfo_all_blocks=1 00:05:05.089 --rc geninfo_unexecuted_blocks=1 00:05:05.089 00:05:05.089 ' 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.089 --rc genhtml_branch_coverage=1 00:05:05.089 --rc genhtml_function_coverage=1 00:05:05.089 --rc genhtml_legend=1 00:05:05.089 --rc geninfo_all_blocks=1 00:05:05.089 --rc geninfo_unexecuted_blocks=1 00:05:05.089 00:05:05.089 ' 00:05:05.089 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.089 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58166 00:05:05.089 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.089 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.089 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58166 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58166 ']' 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:05.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.089 17:46:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.349 [2024-10-25 17:46:23.576089] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:05.349 [2024-10-25 17:46:23.576317] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:05:05.349 [2024-10-25 17:46:23.751281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.608 [2024-10-25 17:46:23.898969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.608 [2024-10-25 17:46:23.899151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.608 [2024-10-25 17:46:23.899322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.608 [2024-10-25 17:46:23.899399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.179 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.179 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.179 POWER: Cannot set governor of lcore 0 to performance 00:05:06.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.179 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.179 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.179 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.179 [2024-10-25 17:46:24.400354] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.179 [2024-10-25 17:46:24.400380] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.179 [2024-10-25 17:46:24.400391] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.179 [2024-10-25 17:46:24.400415] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.179 [2024-10-25 17:46:24.400424] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.179 [2024-10-25 17:46:24.400435] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.179 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.179 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 [2024-10-25 17:46:24.777219] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:06.440 17:46:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.440 17:46:24 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.440 17:46:24 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 ************************************ 00:05:06.440 START TEST scheduler_create_thread 00:05:06.440 ************************************ 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 2 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 3 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 4 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 5 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.440 6 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:06.440 7 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.440 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.700 8 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.700 9 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.700 10 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.700 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.095 17:46:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.095 17:46:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.095 17:46:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.095 17:46:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.095 17:46:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.664 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.664 17:46:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:08.664 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.665 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.604 17:46:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.604 17:46:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.604 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.604 17:46:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.542 ************************************ 00:05:10.542 END TEST scheduler_create_thread 00:05:10.542 ************************************ 00:05:10.542 17:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.542 00:05:10.542 real 0m3.885s 00:05:10.542 user 0m0.028s 00:05:10.542 sys 0m0.010s 00:05:10.542 17:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.542 17:46:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.543 17:46:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.543 17:46:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58166 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58166 ']' 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58166 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58166 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58166' 00:05:10.543 killing process with pid 58166 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58166 00:05:10.543 17:46:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58166 00:05:10.802 [2024-10-25 17:46:29.055632] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:12.185 00:05:12.185 real 0m7.003s 00:05:12.185 user 0m14.224s 00:05:12.185 sys 0m0.595s 00:05:12.185 17:46:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.185 17:46:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.185 ************************************ 00:05:12.185 END TEST event_scheduler 00:05:12.185 ************************************ 00:05:12.185 17:46:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.185 17:46:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.185 17:46:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.185 17:46:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.185 17:46:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.185 ************************************ 00:05:12.185 START TEST app_repeat 00:05:12.185 ************************************ 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58283 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.185 
17:46:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58283' 00:05:12.185 Process app_repeat pid: 58283 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.185 spdk_app_start Round 0 00:05:12.185 17:46:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58283 ']' 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.185 17:46:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.185 [2024-10-25 17:46:30.402684] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:12.185 [2024-10-25 17:46:30.402865] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:05:12.185 [2024-10-25 17:46:30.583003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.445 [2024-10-25 17:46:30.696600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.445 [2024-10-25 17:46:30.696633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.016 17:46:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.016 17:46:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:13.016 17:46:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.275 Malloc0 00:05:13.275 17:46:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.535 Malloc1 00:05:13.535 17:46:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.535 17:46:31 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.535 17:46:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.796 /dev/nbd0 00:05:13.796 17:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.796 17:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.796 1+0 records in 00:05:13.796 1+0 
records out 00:05:13.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545395 s, 7.5 MB/s 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:13.796 17:46:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:13.796 17:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.796 17:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.796 17:46:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.058 /dev/nbd1 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.058 1+0 records in 00:05:14.058 1+0 records out 00:05:14.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218681 s, 18.7 MB/s 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.058 17:46:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.058 17:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.325 { 00:05:14.325 "nbd_device": "/dev/nbd0", 00:05:14.325 "bdev_name": "Malloc0" 00:05:14.325 }, 00:05:14.325 { 00:05:14.325 "nbd_device": "/dev/nbd1", 00:05:14.325 "bdev_name": "Malloc1" 00:05:14.325 } 00:05:14.325 ]' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.325 { 00:05:14.325 "nbd_device": "/dev/nbd0", 00:05:14.325 "bdev_name": "Malloc0" 00:05:14.325 }, 00:05:14.325 { 00:05:14.325 "nbd_device": "/dev/nbd1", 00:05:14.325 "bdev_name": "Malloc1" 00:05:14.325 } 00:05:14.325 ]' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.325 /dev/nbd1' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.325 /dev/nbd1' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.325 256+0 records in 00:05:14.325 256+0 records out 00:05:14.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014163 s, 74.0 MB/s 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.325 256+0 records in 00:05:14.325 256+0 records out 00:05:14.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203992 s, 51.4 MB/s 00:05:14.325 17:46:32 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.325 256+0 records in 00:05:14.325 256+0 records out 00:05:14.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251501 s, 41.7 MB/s 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.325 17:46:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.326 17:46:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.326 17:46:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.585 17:46:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.844 17:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.103 17:46:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.103 17:46:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.363 17:46:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.742 [2024-10-25 17:46:34.817854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.742 [2024-10-25 17:46:34.916860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.742 [2024-10-25 17:46:34.916893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.742 
[2024-10-25 17:46:35.097440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.742 [2024-10-25 17:46:35.097538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.816 spdk_app_start Round 1 00:05:18.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.816 17:46:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.816 17:46:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:18.816 17:46:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58283 ']' 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.816 17:46:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:18.816 17:46:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.816 Malloc0 00:05:18.816 17:46:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.075 Malloc1 00:05:19.075 17:46:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.075 17:46:37 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.075 17:46:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.334 /dev/nbd0 00:05:19.334 17:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.334 17:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.334 1+0 records in 00:05:19.334 1+0 records out 00:05:19.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347219 s, 11.8 MB/s 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.334 
17:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:19.334 17:46:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:19.334 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.334 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.334 17:46:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.594 /dev/nbd1 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.594 1+0 records in 00:05:19.594 1+0 records out 00:05:19.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230122 s, 17.8 MB/s 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:19.594 17:46:37 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:19.594 17:46:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.594 17:46:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.854 { 00:05:19.854 "nbd_device": "/dev/nbd0", 00:05:19.854 "bdev_name": "Malloc0" 00:05:19.854 }, 00:05:19.854 { 00:05:19.854 "nbd_device": "/dev/nbd1", 00:05:19.854 "bdev_name": "Malloc1" 00:05:19.854 } 00:05:19.854 ]' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.854 { 00:05:19.854 "nbd_device": "/dev/nbd0", 00:05:19.854 "bdev_name": "Malloc0" 00:05:19.854 }, 00:05:19.854 { 00:05:19.854 "nbd_device": "/dev/nbd1", 00:05:19.854 "bdev_name": "Malloc1" 00:05:19.854 } 00:05:19.854 ]' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.854 /dev/nbd1' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.854 /dev/nbd1' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.854 
17:46:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.854 256+0 records in 00:05:19.854 256+0 records out 00:05:19.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452222 s, 232 MB/s 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.854 256+0 records in 00:05:19.854 256+0 records out 00:05:19.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251422 s, 41.7 MB/s 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.854 256+0 records in 00:05:19.854 256+0 records out 00:05:19.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236806 s, 44.3 MB/s 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.854 17:46:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.114 17:46:38 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.114 17:46:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.373 17:46:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.633 17:46:38 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.633 17:46:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.633 17:46:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.203 17:46:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.158 [2024-10-25 17:46:40.477537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.158 [2024-10-25 17:46:40.586349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.158 [2024-10-25 17:46:40.586380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.417 [2024-10-25 17:46:40.772624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.417 [2024-10-25 17:46:40.772698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.321 spdk_app_start Round 2 00:05:24.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:24.321 17:46:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.321 17:46:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:24.321 17:46:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58283 ']' 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.321 17:46:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:24.321 17:46:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.580 Malloc0 00:05:24.580 17:46:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.839 Malloc1 00:05:24.839 17:46:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.839 17:46:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.098 /dev/nbd0 00:05:25.098 17:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.098 17:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.098 1+0 records in 00:05:25.098 1+0 records out 00:05:25.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411435 s, 10.0 MB/s 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.098 17:46:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.098 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.098 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.098 17:46:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.098 /dev/nbd1 00:05:25.357 17:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.357 17:46:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.357 17:46:43 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.357 17:46:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.358 1+0 records in 00:05:25.358 1+0 records out 00:05:25.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227107 s, 18.0 MB/s 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.358 17:46:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.358 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.358 17:46:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.358 17:46:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.358 17:46:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.358 17:46:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.617 { 00:05:25.617 "nbd_device": "/dev/nbd0", 00:05:25.617 "bdev_name": "Malloc0" 00:05:25.617 }, 00:05:25.617 { 00:05:25.617 "nbd_device": "/dev/nbd1", 00:05:25.617 "bdev_name": "Malloc1" 00:05:25.617 } 00:05:25.617 ]' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.617 { 
00:05:25.617 "nbd_device": "/dev/nbd0", 00:05:25.617 "bdev_name": "Malloc0" 00:05:25.617 }, 00:05:25.617 { 00:05:25.617 "nbd_device": "/dev/nbd1", 00:05:25.617 "bdev_name": "Malloc1" 00:05:25.617 } 00:05:25.617 ]' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.617 /dev/nbd1' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.617 /dev/nbd1' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.617 256+0 records in 00:05:25.617 256+0 records out 00:05:25.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134582 s, 77.9 MB/s 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.617 17:46:43 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.617 256+0 records in 00:05:25.617 256+0 records out 00:05:25.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247884 s, 42.3 MB/s 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.617 256+0 records in 00:05:25.617 256+0 records out 00:05:25.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278652 s, 37.6 MB/s 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.617 17:46:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
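The `nbd_dd_data_verify` steps above first fill a temp file with 1 MiB of random data (`bs=4096 count=256`), `dd` it onto each device with `oflag=direct`, then re-run with `operation=verify` to `cmp -b -n 1M` the device contents against the same file. A sketch of that write-then-verify round trip, using a second scratch file in place of `/dev/nbdX` so it runs without an NBD device (`oflag=direct` is dropped here because regular files on some filesystems reject it):

```shell
#!/usr/bin/env bash
set -e
# Write-then-verify round trip in the style of nbd_dd_data_verify.
# A scratch file stands in for the real /dev/nbdX target.
pattern=$(mktemp)
target=$(mktemp)

# Write phase: 256 blocks of 4 KiB random data, copied to the target.
dd if=/dev/urandom of="$pattern" bs=4096 count=256 2>/dev/null
dd if="$pattern" of="$target" bs=4096 count=256 2>/dev/null

# Verify phase: byte-compare the first 1 MiB; cmp exits nonzero on mismatch.
cmp -b -n 1M "$pattern" "$target"
echo "verify ok"

rm -f "$pattern" "$target"
```

Note the random pattern file is deleted after the verify pass, matching the `rm .../nbdrandtest` entry that closes the sequence above.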
00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.617 17:46:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.876 17:46:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.136 17:46:44 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.136 17:46:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.395 17:46:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.395 17:46:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.964 17:46:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.901 
[2024-10-25 17:46:46.198694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.901 [2024-10-25 17:46:46.308861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.901 [2024-10-25 17:46:46.308922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.160 [2024-10-25 17:46:46.495634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.160 [2024-10-25 17:46:46.495716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.066 17:46:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58283 ']' 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
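The `nbd_get_count` steps earlier parse the `nbd_get_disks` RPC output with `jq -r '.[] | .nbd_device'` and count matches with `grep -c /dev/nbd`. Note the bare `true` at `nbd_common.sh@65` once the disks are stopped: `grep -c` prints `0` but exits nonzero when nothing matches, so the empty case needs explicit tolerance. A sketch of both cases, with canned JSON standing in for the live RPC:

```shell
#!/usr/bin/env bash
# Count NBD devices from nbd_get_disks-shaped JSON. The JSON strings are
# canned stand-ins for the live RPC output.
count_nbd() {
    # grep -c exits 1 when the count is 0, so tolerate that with || true
    jq -r '.[] | .nbd_device' | { grep -c /dev/nbd || true; }
}

two_disks='[{"nbd_device":"/dev/nbd0","bdev_name":"Malloc0"},
            {"nbd_device":"/dev/nbd1","bdev_name":"Malloc1"}]'
echo "$two_disks" | count_nbd   # prints 2
echo '[]' | count_nbd           # prints 0 instead of failing
```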
00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.066 17:46:48 event.app_repeat -- event/event.sh@39 -- # killprocess 58283 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58283 ']' 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58283 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58283 00:05:30.066 killing process with pid 58283 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58283' 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58283 00:05:30.066 17:46:48 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58283 00:05:31.003 spdk_app_start is called in Round 0. 00:05:31.003 Shutdown signal received, stop current app iteration 00:05:31.003 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:31.003 spdk_app_start is called in Round 1. 00:05:31.003 Shutdown signal received, stop current app iteration 00:05:31.003 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:31.003 spdk_app_start is called in Round 2. 
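The `killprocess 58283` trace above checks the pid is alive with `kill -0`, logs the command name via `ps --no-headers -o comm=`, refuses to proceed if the process runs as `sudo`, then sends SIGTERM and `wait`s for it. A condensed sketch of that shutdown sequence (minus the sudo guard), exercised against a throwaway `sleep` child:

```shell
#!/usr/bin/env bash
# killprocess-style shutdown: verify liveness, log the target, SIGTERM,
# then reap. `wait` only works on children of the current shell.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not alive
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # SIGTERM makes wait return 143
}

sleep 30 &
killprocess_sketch $!
echo "child reaped"
```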
00:05:31.003 Shutdown signal received, stop current app iteration 00:05:31.003 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:31.003 spdk_app_start is called in Round 3. 00:05:31.003 Shutdown signal received, stop current app iteration 00:05:31.003 17:46:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.003 17:46:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.003 00:05:31.003 real 0m19.003s 00:05:31.003 user 0m40.731s 00:05:31.003 sys 0m2.625s 00:05:31.003 17:46:49 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.003 17:46:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.003 ************************************ 00:05:31.003 END TEST app_repeat 00:05:31.003 ************************************ 00:05:31.003 17:46:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.003 17:46:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.003 17:46:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.003 17:46:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.003 17:46:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.003 ************************************ 00:05:31.003 START TEST cpu_locks 00:05:31.003 ************************************ 00:05:31.003 17:46:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.263 * Looking for test storage... 
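`run_test` above wraps each test script: the `'[' 2 -le 1 ']'` check verifies at least a test name and a command were passed, then the `START TEST` banner is printed, the command runs, and an `END TEST` banner closes the block. A stripped-down sketch of that wrapper (the real helper also records timing and toggles xtrace; only the shape is kept here):

```shell
#!/usr/bin/env bash
# run_test-style wrapper: banner, run, banner, propagate the exit status.
run_test_sketch() {
    local name=$1; shift
    [ $# -ge 1 ] || return 1   # need a command after the test name
    echo "************ START TEST $name ************"
    "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

run_test_sketch demo_test true
```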
00:05:31.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.263 17:46:49 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:31.263 17:46:49 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:05:31.263 17:46:49 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:31.263 17:46:49 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.263 17:46:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.264 17:46:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:31.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.264 --rc genhtml_branch_coverage=1 00:05:31.264 --rc genhtml_function_coverage=1 00:05:31.264 --rc genhtml_legend=1 00:05:31.264 --rc geninfo_all_blocks=1 00:05:31.264 --rc geninfo_unexecuted_blocks=1 00:05:31.264 00:05:31.264 ' 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:31.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.264 --rc genhtml_branch_coverage=1 00:05:31.264 --rc genhtml_function_coverage=1 00:05:31.264 --rc genhtml_legend=1 00:05:31.264 --rc geninfo_all_blocks=1 00:05:31.264 --rc geninfo_unexecuted_blocks=1 
00:05:31.264 00:05:31.264 ' 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:31.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.264 --rc genhtml_branch_coverage=1 00:05:31.264 --rc genhtml_function_coverage=1 00:05:31.264 --rc genhtml_legend=1 00:05:31.264 --rc geninfo_all_blocks=1 00:05:31.264 --rc geninfo_unexecuted_blocks=1 00:05:31.264 00:05:31.264 ' 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:31.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.264 --rc genhtml_branch_coverage=1 00:05:31.264 --rc genhtml_function_coverage=1 00:05:31.264 --rc genhtml_legend=1 00:05:31.264 --rc geninfo_all_blocks=1 00:05:31.264 --rc geninfo_unexecuted_blocks=1 00:05:31.264 00:05:31.264 ' 00:05:31.264 17:46:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.264 17:46:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.264 17:46:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.264 17:46:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.264 17:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.264 ************************************ 00:05:31.264 START TEST default_locks 00:05:31.264 ************************************ 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58730 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.264 
17:46:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58730 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58730 ']' 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.264 17:46:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.524 [2024-10-25 17:46:49.750685] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
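The `cmp_versions` trace a few entries earlier (`lt 1.15 2`, splitting each version on `IFS=.-:` into the `ver1`/`ver2` arrays and comparing field by field) implements a numeric version comparison in pure bash, used there to pick lcov options. A compact sketch of the same idea, restricted to dot-separated numeric versions (the real helper also handles `-` and `:` separators and the other comparison operators):

```shell
#!/usr/bin/env bash
# Field-wise numeric version compare, in the style of scripts/common.sh.
# Missing fields count as 0, so 1.2 and 1.2.0 compare equal.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing fields numerically rather than as strings is the point: a string compare would wrongly report `1.2 < 1.10` as false.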
00:05:31.524 [2024-10-25 17:46:49.750809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58730 ] 00:05:31.524 [2024-10-25 17:46:49.930766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.783 [2024-10-25 17:46:50.041592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.721 17:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.721 17:46:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:32.721 17:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58730 00:05:32.721 17:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.721 17:46:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58730 ']' 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.981 killing process with pid 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58730' 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58730 00:05:32.981 17:46:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58730 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58730 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58730 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58730 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58730 ']' 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
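`locks_exist 58730` above confirms the target actually took its CPU-core lock by listing the file locks held by that pid (`lslocks -p`) and grepping for the `spdk_cpu_lock` lock-file name. A standalone sketch of that check — the pid and marker are whatever the caller supplies, and `lslocks` comes from util-linux:

```shell
#!/usr/bin/env bash
# Does the given pid hold a file lock whose path mentions the marker?
# Mirrors the locks_exist helper in cpu_locks.sh.
locks_exist_sketch() {
    local pid=$1 marker=$2
    lslocks -p "$pid" 2>/dev/null | grep -q "$marker"
}

# The current shell holds no spdk_cpu_lock, so this reports the negative case.
if locks_exist_sketch $$ spdk_cpu_lock; then
    echo "lock held"
else
    echo "no spdk_cpu_lock held by pid $$"
fi
```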
00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58730) - No such process 00:05:35.520 ERROR: process (pid: 58730) is no longer running 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.520 00:05:35.520 real 0m4.016s 00:05:35.520 user 0m3.940s 00:05:35.520 sys 0m0.676s 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.520 17:46:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.520 ************************************ 00:05:35.520 END TEST default_locks 00:05:35.520 ************************************ 00:05:35.520 17:46:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:35.520 17:46:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:35.520 17:46:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.520 17:46:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.520 ************************************ 00:05:35.520 START TEST default_locks_via_rpc 00:05:35.520 ************************************ 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58805 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58805 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58805 ']' 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.520 17:46:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.520 [2024-10-25 17:46:53.821883] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
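The `NOT waitforlisten 58730` sequence above is the harness's negative-test wrapper: `valid_exec_arg` confirms the argument is callable, the command runs against the already-killed pid, and the resulting exit status `es` must be nonzero for `NOT` to succeed, with `(( es > 128 ))` folding signal deaths into plain failure (`es=1`). A minimal sketch of that inversion:

```shell
#!/usr/bin/env bash
# NOT-style inversion: succeed only when the wrapped command fails.
# The real helper also validates the command; only the status logic is kept.
NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1   # treat "killed by signal" as plain failure
    (( es != 0 ))            # success iff the command failed
}

NOT_sketch false && echo "false failed, as required"
```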
00:05:35.520 [2024-10-25 17:46:53.822014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58805 ] 00:05:35.779 [2024-10-25 17:46:53.996120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.780 [2024-10-25 17:46:54.103145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.717 17:46:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58805 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58805 00:05:36.717 17:46:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58805 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58805 ']' 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58805 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58805 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.977 killing process with pid 58805 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58805' 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58805 00:05:36.977 17:46:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58805 00:05:39.561 00:05:39.561 real 0m3.761s 00:05:39.561 user 0m3.690s 00:05:39.561 sys 0m0.571s 00:05:39.561 17:46:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.561 17:46:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.561 ************************************ 00:05:39.561 END TEST default_locks_via_rpc 00:05:39.561 ************************************ 00:05:39.561 17:46:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.561 17:46:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.561 17:46:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.561 17:46:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.561 ************************************ 00:05:39.561 START TEST non_locking_app_on_locked_coremask 00:05:39.561 ************************************ 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58874 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58874 /var/tmp/spdk.sock 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58874 ']' 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.561 17:46:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.561 [2024-10-25 17:46:57.664452] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:39.561 [2024-10-25 17:46:57.664587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ] 00:05:39.561 [2024-10-25 17:46:57.844036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.561 [2024-10-25 17:46:57.949005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58894 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58894 /var/tmp/spdk2.sock 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58894 ']' 00:05:40.545 17:46:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.545 17:46:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.545 [2024-10-25 17:46:58.839753] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:40.545 [2024-10-25 17:46:58.839993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:05:40.805 [2024-10-25 17:46:59.010519] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.805 [2024-10-25 17:46:59.010591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.805 [2024-10-25 17:46:59.220385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58874 ']' 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58874' 00:05:43.341 killing process with pid 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58874 00:05:43.341 17:47:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58874 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58894 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58894 ']' 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58894 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58894 00:05:48.614 killing process with pid 58894 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58894' 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58894 00:05:48.614 17:47:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58894 00:05:49.994 00:05:49.994 real 0m10.825s 00:05:49.994 user 0m11.020s 00:05:49.994 sys 0m1.150s 00:05:49.994 17:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:49.994 ************************************ 00:05:49.994 END TEST non_locking_app_on_locked_coremask 00:05:49.994 ************************************ 00:05:49.994 17:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.253 17:47:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.253 17:47:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.253 17:47:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.253 17:47:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.253 ************************************ 00:05:50.253 START TEST locking_app_on_unlocked_coremask 00:05:50.253 ************************************ 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59034 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59034 /var/tmp/spdk.sock 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59034 ']' 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:05:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.253 17:47:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.254 [2024-10-25 17:47:08.545773] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:50.254 [2024-10-25 17:47:08.546018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59034 ] 00:05:50.512 [2024-10-25 17:47:08.720678] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.512 [2024-10-25 17:47:08.720733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.512 [2024-10-25 17:47:08.828271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59056 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59056 /var/tmp/spdk2.sock 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59056 ']' 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.450 17:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.450 [2024-10-25 17:47:09.763783] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:51.450 [2024-10-25 17:47:09.764470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:05:51.709 [2024-10-25 17:47:09.934196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.967 [2024-10-25 17:47:10.147805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59056 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59056 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59034 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59034 ']' 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59034 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59034 00:05:54.502 killing process with pid 59034 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59034' 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59034 00:05:54.502 17:47:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59034 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59056 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59056 ']' 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59056 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.797 17:47:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59056 00:05:59.797 killing process with pid 59056 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59056' 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59056 00:05:59.797 17:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59056 00:06:01.177 00:06:01.177 real 0m11.027s 00:06:01.177 user 0m11.314s 00:06:01.177 sys 0m1.242s 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.177 ************************************ 00:06:01.177 END TEST locking_app_on_unlocked_coremask 00:06:01.177 ************************************ 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.177 17:47:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.177 17:47:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.177 17:47:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.177 17:47:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.177 ************************************ 00:06:01.177 START TEST locking_app_on_locked_coremask 00:06:01.177 ************************************ 
00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59194 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59194 /var/tmp/spdk.sock 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59194 ']' 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.177 17:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.436 [2024-10-25 17:47:19.645349] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:01.437 [2024-10-25 17:47:19.645968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 00:06:01.437 [2024-10-25 17:47:19.815066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.696 [2024-10-25 17:47:19.917003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59212 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59212 /var/tmp/spdk2.sock 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:02.633 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59212 /var/tmp/spdk2.sock 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59212 /var/tmp/spdk2.sock 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59212 ']' 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.634 17:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.634 [2024-10-25 17:47:20.817059] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:02.634 [2024-10-25 17:47:20.817255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:06:02.634 [2024-10-25 17:47:20.985744] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59194 has claimed it. 00:06:02.634 [2024-10-25 17:47:20.985819] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:03.202 ERROR: process (pid: 59212) is no longer running 00:06:03.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59212) - No such process 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.202 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59194 00:06:03.203 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59194 00:06:03.203 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59194 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59194 ']' 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59194 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.462 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59194 00:06:03.722 
17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.722 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.722 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59194' 00:06:03.722 killing process with pid 59194 00:06:03.722 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59194 00:06:03.722 17:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59194 00:06:06.259 00:06:06.259 real 0m4.600s 00:06:06.259 user 0m4.726s 00:06:06.259 sys 0m0.818s 00:06:06.259 17:47:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.259 17:47:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 ************************************ 00:06:06.259 END TEST locking_app_on_locked_coremask 00:06:06.259 ************************************ 00:06:06.259 17:47:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.259 17:47:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.259 17:47:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.259 17:47:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 ************************************ 00:06:06.259 START TEST locking_overlapped_coremask 00:06:06.259 ************************************ 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59286 00:06:06.259 17:47:24 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59286 /var/tmp/spdk.sock 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59286 ']' 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.259 17:47:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 [2024-10-25 17:47:24.317753] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:06.259 [2024-10-25 17:47:24.318356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59286 ] 00:06:06.259 [2024-10-25 17:47:24.518822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.259 [2024-10-25 17:47:24.623145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.259 [2024-10-25 17:47:24.623326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.259 [2024-10-25 17:47:24.623340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59305 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59305 /var/tmp/spdk2.sock 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59305 /var/tmp/spdk2.sock 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59305 /var/tmp/spdk2.sock 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59305 ']' 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.198 17:47:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.198 [2024-10-25 17:47:25.547994] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:07.198 [2024-10-25 17:47:25.548202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59305 ] 00:06:07.458 [2024-10-25 17:47:25.715347] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59286 has claimed it. 00:06:07.458 [2024-10-25 17:47:25.715422] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:08.027 ERROR: process (pid: 59305) is no longer running 00:06:08.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59305) - No such process 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59286 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59286 ']' 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59286 00:06:08.027 17:47:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59286 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59286' 00:06:08.027 killing process with pid 59286 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59286 00:06:08.027 17:47:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59286 00:06:10.575 00:06:10.575 real 0m4.256s 00:06:10.575 user 0m11.481s 00:06:10.575 sys 0m0.610s 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.575 ************************************ 00:06:10.575 END TEST locking_overlapped_coremask 00:06:10.575 ************************************ 00:06:10.575 17:47:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.575 17:47:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.575 17:47:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.575 17:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.575 ************************************ 00:06:10.575 START TEST 
locking_overlapped_coremask_via_rpc 00:06:10.575 ************************************ 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59371 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59371 /var/tmp/spdk.sock 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59371 ']' 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.575 17:47:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.575 [2024-10-25 17:47:28.648501] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:10.575 [2024-10-25 17:47:28.648703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:06:10.575 [2024-10-25 17:47:28.819774] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.575 [2024-10-25 17:47:28.819820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.575 [2024-10-25 17:47:28.928558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.575 [2024-10-25 17:47:28.928654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.575 [2024-10-25 17:47:28.928673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59389 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.514 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59389 ']' 00:06:11.515 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.515 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.515 17:47:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.515 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.515 17:47:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.515 [2024-10-25 17:47:29.854089] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:11.515 [2024-10-25 17:47:29.854301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:06:11.774 [2024-10-25 17:47:30.020850] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.774 [2024-10-25 17:47:30.020905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.034 [2024-10-25 17:47:30.304308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.034 [2024-10-25 17:47:30.308049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.034 [2024-10-25 17:47:30.308082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.572 17:47:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 [2024-10-25 17:47:32.416066] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59371 has claimed it. 00:06:14.572 request: 00:06:14.572 { 00:06:14.572 "method": "framework_enable_cpumask_locks", 00:06:14.572 "req_id": 1 00:06:14.572 } 00:06:14.572 Got JSON-RPC error response 00:06:14.572 response: 00:06:14.572 { 00:06:14.572 "code": -32603, 00:06:14.572 "message": "Failed to claim CPU core: 2" 00:06:14.572 } 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59371 /var/tmp/spdk.sock 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59371 ']' 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59389 ']' 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.572 00:06:14.572 real 0m4.299s 00:06:14.572 user 0m1.217s 00:06:14.572 sys 0m0.214s 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.572 ************************************ 00:06:14.572 END TEST locking_overlapped_coremask_via_rpc 00:06:14.572 ************************************ 00:06:14.572 17:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 17:47:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.572 17:47:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59371 ]] 00:06:14.572 17:47:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59371 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59371 ']' 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59371 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59371 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.572 killing process with pid 59371 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59371' 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59371 00:06:14.572 17:47:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59371 00:06:17.121 17:47:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:06:17.121 17:47:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59389 ']' 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59389 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59389 00:06:17.121 killing process with pid 59389 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59389' 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59389 00:06:17.121 17:47:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59389 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.690 Process with pid 59371 is not found 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59371 ]] 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59371 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59371 ']' 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59371 00:06:19.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59371) - No such process 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59371 is not found' 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59389 ']' 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59389 00:06:19.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59389) - No such process 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59389 is not found' 00:06:19.690 Process with pid 59389 is not found 00:06:19.690 17:47:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.690 00:06:19.690 real 0m48.276s 00:06:19.690 user 1m22.419s 00:06:19.690 sys 0m6.658s 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.690 17:47:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.690 
************************************ 00:06:19.690 END TEST cpu_locks 00:06:19.690 ************************************ 00:06:19.690 ************************************ 00:06:19.690 END TEST event 00:06:19.690 ************************************ 00:06:19.690 00:06:19.690 real 1m19.595s 00:06:19.690 user 2m24.626s 00:06:19.690 sys 0m10.596s 00:06:19.690 17:47:37 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.690 17:47:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.690 17:47:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.690 17:47:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.690 17:47:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.690 17:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.690 ************************************ 00:06:19.690 START TEST thread 00:06:19.690 ************************************ 00:06:19.690 17:47:37 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.690 * Looking for test storage... 
00:06:19.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:19.690 17:47:37 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:19.690 17:47:37 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:06:19.690 17:47:37 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:19.690 17:47:38 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:19.690 17:47:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.690 17:47:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.690 17:47:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.690 17:47:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.690 17:47:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.690 17:47:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.690 17:47:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.690 17:47:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.690 17:47:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.690 17:47:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.690 17:47:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.690 17:47:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:19.690 17:47:38 thread -- scripts/common.sh@345 -- # : 1 00:06:19.690 17:47:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.690 17:47:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.690 17:47:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:19.690 17:47:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:19.690 17:47:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.690 17:47:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:19.690 17:47:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.690 17:47:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:19.690 17:47:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:19.690 17:47:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.690 17:47:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:19.690 17:47:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.690 17:47:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.690 17:47:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.690 17:47:38 thread -- scripts/common.sh@368 -- # return 0 00:06:19.690 17:47:38 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.690 17:47:38 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:19.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.690 --rc genhtml_branch_coverage=1 00:06:19.690 --rc genhtml_function_coverage=1 00:06:19.690 --rc genhtml_legend=1 00:06:19.690 --rc geninfo_all_blocks=1 00:06:19.690 --rc geninfo_unexecuted_blocks=1 00:06:19.690 00:06:19.690 ' 00:06:19.690 17:47:38 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:19.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.690 --rc genhtml_branch_coverage=1 00:06:19.691 --rc genhtml_function_coverage=1 00:06:19.691 --rc genhtml_legend=1 00:06:19.691 --rc geninfo_all_blocks=1 00:06:19.691 --rc geninfo_unexecuted_blocks=1 00:06:19.691 00:06:19.691 ' 00:06:19.691 17:47:38 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:19.691 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.691 --rc genhtml_branch_coverage=1 00:06:19.691 --rc genhtml_function_coverage=1 00:06:19.691 --rc genhtml_legend=1 00:06:19.691 --rc geninfo_all_blocks=1 00:06:19.691 --rc geninfo_unexecuted_blocks=1 00:06:19.691 00:06:19.691 ' 00:06:19.691 17:47:38 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.691 --rc genhtml_branch_coverage=1 00:06:19.691 --rc genhtml_function_coverage=1 00:06:19.691 --rc genhtml_legend=1 00:06:19.691 --rc geninfo_all_blocks=1 00:06:19.691 --rc geninfo_unexecuted_blocks=1 00:06:19.691 00:06:19.691 ' 00:06:19.691 17:47:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.691 17:47:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:19.691 17:47:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.691 17:47:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 ************************************ 00:06:19.691 START TEST thread_poller_perf 00:06:19.691 ************************************ 00:06:19.691 17:47:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.691 [2024-10-25 17:47:38.084708] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:19.691 [2024-10-25 17:47:38.084808] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:06:19.951 [2024-10-25 17:47:38.258909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.951 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:19.951 [2024-10-25 17:47:38.365922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.331 [2024-10-25T17:47:39.767Z] ====================================== 00:06:21.331 [2024-10-25T17:47:39.767Z] busy:2300080262 (cyc) 00:06:21.331 [2024-10-25T17:47:39.767Z] total_run_count: 426000 00:06:21.331 [2024-10-25T17:47:39.767Z] tsc_hz: 2290000000 (cyc) 00:06:21.331 [2024-10-25T17:47:39.767Z] ====================================== 00:06:21.331 [2024-10-25T17:47:39.767Z] poller_cost: 5399 (cyc), 2357 (nsec) 00:06:21.331 00:06:21.331 real 0m1.547s 00:06:21.331 user 0m1.355s 00:06:21.331 sys 0m0.085s 00:06:21.331 ************************************ 00:06:21.331 END TEST thread_poller_perf 00:06:21.331 ************************************ 00:06:21.331 17:47:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.331 17:47:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.331 17:47:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.331 17:47:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:21.331 17:47:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.331 17:47:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.331 ************************************ 00:06:21.331 START TEST thread_poller_perf 00:06:21.331 
************************************ 00:06:21.331 17:47:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.331 [2024-10-25 17:47:39.709126] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:21.331 [2024-10-25 17:47:39.709330] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:06:21.590 [2024-10-25 17:47:39.886878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.590 [2024-10-25 17:47:39.993519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.590 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:22.984 [2024-10-25T17:47:41.420Z] ====================================== 00:06:22.984 [2024-10-25T17:47:41.420Z] busy:2293424656 (cyc) 00:06:22.984 [2024-10-25T17:47:41.420Z] total_run_count: 5491000 00:06:22.984 [2024-10-25T17:47:41.420Z] tsc_hz: 2290000000 (cyc) 00:06:22.984 [2024-10-25T17:47:41.420Z] ====================================== 00:06:22.984 [2024-10-25T17:47:41.420Z] poller_cost: 417 (cyc), 182 (nsec) 00:06:22.984 00:06:22.984 real 0m1.554s 00:06:22.984 user 0m1.336s 00:06:22.984 sys 0m0.110s 00:06:22.984 17:47:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.984 17:47:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.984 ************************************ 00:06:22.984 END TEST thread_poller_perf 00:06:22.984 ************************************ 00:06:22.984 17:47:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.984 00:06:22.984 real 0m3.464s 00:06:22.984 user 0m2.855s 00:06:22.984 sys 0m0.406s 00:06:22.984 ************************************ 
00:06:22.984 END TEST thread 00:06:22.984 ************************************ 00:06:22.984 17:47:41 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.984 17:47:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.984 17:47:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:22.984 17:47:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.984 17:47:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.984 17:47:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.984 17:47:41 -- common/autotest_common.sh@10 -- # set +x 00:06:22.984 ************************************ 00:06:22.984 START TEST app_cmdline 00:06:22.984 ************************************ 00:06:22.984 17:47:41 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.243 * Looking for test storage... 00:06:23.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
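The poller_cost figures reported by the two thread_poller_perf runs above follow directly from the busy cycle count, the total run count, and the printed TSC frequency. A minimal sketch of that arithmetic, assuming integer truncation (which matches the logged values):

```python
# Reproduce the poller_cost arithmetic from the two thread_poller_perf
# runs logged above, using the busy cycle counts, run counts, and the
# reported tsc_hz. Integer division is assumed, matching the output.

def poller_cost(busy_cyc: int, run_count: int, tsc_hz: int) -> tuple:
    """Return (cost_cyc, cost_nsec) per poller invocation."""
    cost_cyc = busy_cyc // run_count
    cost_nsec = cost_cyc * 1_000_000_000 // tsc_hz
    return cost_cyc, cost_nsec

# 1 us period run:  busy:2300080262 (cyc), total_run_count: 426000
print(poller_cost(2300080262, 426000, 2290000000))   # (5399, 2357)
# 0 us period run:  busy:2293424656 (cyc), total_run_count: 5491000
print(poller_cost(2293424656, 5491000, 2290000000))  # (417, 182)
```

Both results reproduce the logged poller_cost lines: 5399 (cyc) / 2357 (nsec) with a 1 us period and 417 (cyc) / 182 (nsec) with a 0 us period.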
00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.243 17:47:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.243 --rc genhtml_branch_coverage=1 00:06:23.243 --rc genhtml_function_coverage=1 00:06:23.243 --rc 
genhtml_legend=1 00:06:23.243 --rc geninfo_all_blocks=1 00:06:23.243 --rc geninfo_unexecuted_blocks=1 00:06:23.243 00:06:23.243 ' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.243 --rc genhtml_branch_coverage=1 00:06:23.243 --rc genhtml_function_coverage=1 00:06:23.243 --rc genhtml_legend=1 00:06:23.243 --rc geninfo_all_blocks=1 00:06:23.243 --rc geninfo_unexecuted_blocks=1 00:06:23.243 00:06:23.243 ' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.243 --rc genhtml_branch_coverage=1 00:06:23.243 --rc genhtml_function_coverage=1 00:06:23.243 --rc genhtml_legend=1 00:06:23.243 --rc geninfo_all_blocks=1 00:06:23.243 --rc geninfo_unexecuted_blocks=1 00:06:23.243 00:06:23.243 ' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.243 --rc genhtml_branch_coverage=1 00:06:23.243 --rc genhtml_function_coverage=1 00:06:23.243 --rc genhtml_legend=1 00:06:23.243 --rc geninfo_all_blocks=1 00:06:23.243 --rc geninfo_unexecuted_blocks=1 00:06:23.243 00:06:23.243 ' 00:06:23.243 17:47:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:23.243 17:47:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59710 00:06:23.243 17:47:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:23.243 17:47:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59710 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59710 ']' 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.243 17:47:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.243 [2024-10-25 17:47:41.642426] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:23.243 [2024-10-25 17:47:41.642651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59710 ] 00:06:23.503 [2024-10-25 17:47:41.802890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.503 [2024-10-25 17:47:41.910211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.442 17:47:42 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.442 17:47:42 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:24.442 17:47:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:24.702 { 00:06:24.702 "version": "SPDK v25.01-pre git sha1 e83d2213a", 00:06:24.702 "fields": { 00:06:24.702 "major": 25, 00:06:24.702 "minor": 1, 00:06:24.702 "patch": 0, 00:06:24.702 "suffix": "-pre", 00:06:24.702 "commit": "e83d2213a" 00:06:24.702 } 00:06:24.702 } 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.702 17:47:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.702 17:47:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.702 17:47:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.702 17:47:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.702 17:47:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.702 17:47:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.702 17:47:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:24.702 17:47:43 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.962 request: 00:06:24.962 { 00:06:24.962 "method": "env_dpdk_get_mem_stats", 00:06:24.962 "req_id": 1 00:06:24.962 } 00:06:24.962 Got JSON-RPC error response 00:06:24.962 response: 00:06:24.962 { 00:06:24.962 "code": -32601, 00:06:24.962 "message": "Method not found" 00:06:24.962 } 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.962 17:47:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59710 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59710 ']' 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59710 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59710 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59710' 00:06:24.962 killing process with pid 59710 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@969 -- # kill 59710 00:06:24.962 17:47:43 app_cmdline -- common/autotest_common.sh@974 -- # wait 59710 00:06:27.501 00:06:27.501 real 0m4.216s 00:06:27.501 user 0m4.412s 00:06:27.501 sys 0m0.578s 00:06:27.501 17:47:45 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.501 17:47:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.501 ************************************ 00:06:27.501 END TEST app_cmdline 00:06:27.501 ************************************ 00:06:27.501 17:47:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:27.501 17:47:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.501 17:47:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.501 17:47:45 -- common/autotest_common.sh@10 -- # set +x 00:06:27.501 ************************************ 00:06:27.501 START TEST version 00:06:27.501 ************************************ 00:06:27.501 17:47:45 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:27.501 * Looking for test storage... 00:06:27.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.501 17:47:45 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:27.501 17:47:45 version -- common/autotest_common.sh@1689 -- # lcov --version 00:06:27.501 17:47:45 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:27.501 17:47:45 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:27.501 17:47:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.501 17:47:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.501 17:47:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.501 17:47:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.501 17:47:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.502 17:47:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.502 17:47:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.502 17:47:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.502 17:47:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.502 17:47:45 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:27.502 17:47:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.502 17:47:45 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.502 17:47:45 version -- scripts/common.sh@345 -- # : 1 00:06:27.502 17:47:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.502 17:47:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.502 17:47:45 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.502 17:47:45 version -- scripts/common.sh@353 -- # local d=1 00:06:27.502 17:47:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.502 17:47:45 version -- scripts/common.sh@355 -- # echo 1 00:06:27.502 17:47:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.502 17:47:45 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.502 17:47:45 version -- scripts/common.sh@353 -- # local d=2 00:06:27.502 17:47:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.502 17:47:45 version -- scripts/common.sh@355 -- # echo 2 00:06:27.502 17:47:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.502 17:47:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.502 17:47:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.502 17:47:45 version -- scripts/common.sh@368 -- # return 0 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.502 --rc genhtml_branch_coverage=1 00:06:27.502 --rc genhtml_function_coverage=1 00:06:27.502 --rc genhtml_legend=1 00:06:27.502 --rc geninfo_all_blocks=1 00:06:27.502 --rc geninfo_unexecuted_blocks=1 00:06:27.502 00:06:27.502 ' 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1702 -- # 
LCOV_OPTS=' 00:06:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.502 --rc genhtml_branch_coverage=1 00:06:27.502 --rc genhtml_function_coverage=1 00:06:27.502 --rc genhtml_legend=1 00:06:27.502 --rc geninfo_all_blocks=1 00:06:27.502 --rc geninfo_unexecuted_blocks=1 00:06:27.502 00:06:27.502 ' 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.502 --rc genhtml_branch_coverage=1 00:06:27.502 --rc genhtml_function_coverage=1 00:06:27.502 --rc genhtml_legend=1 00:06:27.502 --rc geninfo_all_blocks=1 00:06:27.502 --rc geninfo_unexecuted_blocks=1 00:06:27.502 00:06:27.502 ' 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.502 --rc genhtml_branch_coverage=1 00:06:27.502 --rc genhtml_function_coverage=1 00:06:27.502 --rc genhtml_legend=1 00:06:27.502 --rc geninfo_all_blocks=1 00:06:27.502 --rc geninfo_unexecuted_blocks=1 00:06:27.502 00:06:27.502 ' 00:06:27.502 17:47:45 version -- app/version.sh@17 -- # get_header_version major 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # cut -f2 00:06:27.502 17:47:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.502 17:47:45 version -- app/version.sh@17 -- # major=25 00:06:27.502 17:47:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.502 17:47:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # cut -f2 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.502 17:47:45 version -- app/version.sh@18 -- # minor=1 00:06:27.502 17:47:45 
version -- app/version.sh@19 -- # get_header_version patch 00:06:27.502 17:47:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # cut -f2 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.502 17:47:45 version -- app/version.sh@19 -- # patch=0 00:06:27.502 17:47:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.502 17:47:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # cut -f2 00:06:27.502 17:47:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.502 17:47:45 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.502 17:47:45 version -- app/version.sh@22 -- # version=25.1 00:06:27.502 17:47:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.502 17:47:45 version -- app/version.sh@28 -- # version=25.1rc0 00:06:27.502 17:47:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.502 17:47:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:27.502 17:47:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:27.502 17:47:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:27.502 ************************************ 00:06:27.502 END TEST version 00:06:27.502 ************************************ 00:06:27.502 00:06:27.502 real 0m0.310s 00:06:27.502 user 0m0.165s 00:06:27.502 sys 0m0.201s 00:06:27.502 17:47:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.502 17:47:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:27.762 
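The version.sh trace above assembles the package version from SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX in include/spdk/version.h (major=25, minor=1, patch=0, suffix=-pre) and then checks it against the Python module's 25.1rc0. A hedged sketch of that assembly, following the branches visible in the trace (patch appended only when nonzero, "-pre" rendered as the "rc0" pre-release marker):

```python
# Mirror the version assembly traced in test/app/version.sh above.
# Branch structure is inferred from the xtrace: "(( patch != 0 ))"
# guards the patch component, and the -pre suffix maps to "rc0"
# (py_version=25.1rc0 in the log).

def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:          # skipped in the trace, since patch=0
        version += f".{patch}"
    if suffix == "-pre":    # version=25.1rc0 in the trace
        version += "rc0"
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0
```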
17:47:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:27.762 17:47:45 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:27.762 17:47:45 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:27.762 17:47:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.762 17:47:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.762 17:47:45 -- common/autotest_common.sh@10 -- # set +x 00:06:27.762 ************************************ 00:06:27.762 START TEST bdev_raid 00:06:27.762 ************************************ 00:06:27.762 17:47:45 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:27.762 * Looking for test storage... 00:06:27.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:27.762 17:47:46 bdev_raid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:27.762 17:47:46 bdev_raid -- common/autotest_common.sh@1689 -- # lcov --version 00:06:27.762 17:47:46 bdev_raid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:27.762 17:47:46 bdev_raid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:27.762 17:47:46 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.022 17:47:46 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:28.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.022 --rc genhtml_branch_coverage=1 00:06:28.022 --rc genhtml_function_coverage=1 00:06:28.022 --rc genhtml_legend=1 00:06:28.022 --rc geninfo_all_blocks=1 00:06:28.022 --rc geninfo_unexecuted_blocks=1 00:06:28.022 00:06:28.022 ' 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:28.022 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:28.022 --rc genhtml_branch_coverage=1 00:06:28.022 --rc genhtml_function_coverage=1 00:06:28.022 --rc genhtml_legend=1 00:06:28.022 --rc geninfo_all_blocks=1 00:06:28.022 --rc geninfo_unexecuted_blocks=1 00:06:28.022 00:06:28.022 ' 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:28.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.022 --rc genhtml_branch_coverage=1 00:06:28.022 --rc genhtml_function_coverage=1 00:06:28.022 --rc genhtml_legend=1 00:06:28.022 --rc geninfo_all_blocks=1 00:06:28.022 --rc geninfo_unexecuted_blocks=1 00:06:28.022 00:06:28.022 ' 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:28.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.022 --rc genhtml_branch_coverage=1 00:06:28.022 --rc genhtml_function_coverage=1 00:06:28.022 --rc genhtml_legend=1 00:06:28.022 --rc geninfo_all_blocks=1 00:06:28.022 --rc geninfo_unexecuted_blocks=1 00:06:28.022 00:06:28.022 ' 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.022 17:47:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:28.022 17:47:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.022 17:47:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 ************************************ 
00:06:28.022 START TEST raid1_resize_data_offset_test 00:06:28.022 ************************************ 00:06:28.022 Process raid pid: 59892 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59892 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59892' 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59892 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 59892 ']' 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.022 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.023 17:47:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.023 [2024-10-25 17:47:46.326583] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
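Each test section above re-runs the cmp_versions helper from scripts/common.sh to decide whether the installed lcov predates version 2 (the repeated "lt 1.15 2" trace). The helper splits both versions into components and compares them numerically, left to right. A sketch of that comparison, with the operator handling and the zero-fill for missing components inferred from the trace rather than taken from the script itself:

```python
import re

# Component-wise version comparison, sketching scripts/common.sh's
# cmp_versions helper as traced in the log (e.g. "lt 1.15 2").
# Assumptions: components split on . - : (the IFS=.-: in the trace),
# missing components in the shorter version compare as 0, and the
# return value reflects the requested comparison operator.

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    a = [int(x) for x in re.split(r"[.:-]", ver1)]
    b = [int(x) for x in re.split(r"[.:-]", ver2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x > y:
            return op in (">", ">=")
        if x < y:
            return op in ("<", "<=")
    return op in ("<=", ">=", "=")

print(cmp_versions("1.15", "<", "2"))  # True: lcov 1.15 is older than 2
```

With lcov 1.15 the comparison succeeds, which is why every section exports the --rc lcov_branch_coverage / lcov_function_coverage options seen in the log.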
00:06:28.023 [2024-10-25 17:47:46.326773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.283 [2024-10-25 17:47:46.501500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.283 [2024-10-25 17:47:46.615354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.543 [2024-10-25 17:47:46.808554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.543 [2024-10-25 17:47:46.808679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.802 malloc0 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.802 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.062 malloc1 00:06:29.062 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.062 17:47:47 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:29.062 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.063 null0 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.063 [2024-10-25 17:47:47.310983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:29.063 [2024-10-25 17:47:47.312793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:29.063 [2024-10-25 17:47:47.312900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:29.063 [2024-10-25 17:47:47.313080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:29.063 [2024-10-25 17:47:47.313135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:29.063 [2024-10-25 17:47:47.313418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:29.063 [2024-10-25 17:47:47.313616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:29.063 [2024-10-25 17:47:47.313633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:29.063 [2024-10-25 17:47:47.313771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
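The raid creation trace above reports "blockcnt 129024, blocklen 512" for the assembled Raid bdev built from 64 MiB base bdevs, and the test goes on to read a data_offset of 2048 blocks back via bdev_raid_get_bdevs. Those numbers are mutually consistent; a sketch of the geometry, under the assumption (for this sketch only) that the usable region is simply the base block count minus the data offset:

```python
# Check the Raid bdev geometry reported above: 64 MiB base bdevs with
# 512-byte blocks and a 2048-block data_offset yield blockcnt 129024.
# Assumption for this sketch: usable blocks = base blocks - data_offset.

BLOCKLEN = 512
base_mib = 64
data_offset_blocks = 2048

base_blocks = base_mib * 1024 * 1024 // BLOCKLEN   # 131072
blockcnt = base_blocks - data_offset_blocks        # 129024

print(base_blocks, blockcnt)  # 131072 129024
```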
00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.063 [2024-10-25 17:47:47.374941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.063 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.632 malloc2 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.632 [2024-10-25 17:47:47.907855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:29.632 [2024-10-25 17:47:47.924668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.632 [2024-10-25 17:47:47.926474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.632 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59892 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 59892 ']' 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 59892 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:29.633 17:47:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59892 00:06:29.633 killing process with pid 59892 00:06:29.633 17:47:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.633 17:47:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.633 17:47:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59892' 00:06:29.633 17:47:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 59892 00:06:29.633 17:47:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 59892 00:06:29.633 [2024-10-25 17:47:48.021223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.633 [2024-10-25 17:47:48.021486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:29.633 [2024-10-25 17:47:48.021547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.633 [2024-10-25 17:47:48.021564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:29.633 [2024-10-25 17:47:48.055485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.633 [2024-10-25 17:47:48.055796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.633 [2024-10-25 17:47:48.055813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:31.540 [2024-10-25 17:47:49.724291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.488 17:47:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:32.488 00:06:32.488 real 0m4.531s 00:06:32.488 user 0m4.473s 00:06:32.488 sys 0m0.491s 00:06:32.488 17:47:50 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.488 17:47:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.488 ************************************ 00:06:32.488 END TEST raid1_resize_data_offset_test 00:06:32.488 ************************************ 00:06:32.488 17:47:50 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:32.488 17:47:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.488 17:47:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.488 17:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.488 ************************************ 00:06:32.488 START TEST raid0_resize_superblock_test 00:06:32.488 ************************************ 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59981 00:06:32.488 Process raid pid: 59981 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59981' 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59981 00:06:32.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59981 ']' 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.488 17:47:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.749 [2024-10-25 17:47:50.934721] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:32.749 [2024-10-25 17:47:50.934948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.749 [2024-10-25 17:47:51.105558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.008 [2024-10-25 17:47:51.206976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.008 [2024-10-25 17:47:51.399207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.008 [2024-10-25 17:47:51.399242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.577 17:47:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.577 17:47:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:33.577 17:47:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:33.577 17:47:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.577 17:47:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.844 malloc0 00:06:33.844 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.844 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:33.844 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.844 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.844 [2024-10-25 17:47:52.273085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:33.844 [2024-10-25 17:47:52.273241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.844 [2024-10-25 17:47:52.273270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:33.844 [2024-10-25 17:47:52.273282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.844 [2024-10-25 17:47:52.275335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.844 [2024-10-25 17:47:52.275374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.115 pt0 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 e5cede72-2412-4ae0-ba02-13c421ef96a2 00:06:34.115 17:47:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 160c773b-9b6d-4713-96c0-df13c304a252 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 2a2d50e0-ef2d-4983-bbc4-f8ab3978e850 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 [2024-10-25 17:47:52.407428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160c773b-9b6d-4713-96c0-df13c304a252 is claimed 00:06:34.115 [2024-10-25 17:47:52.407541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2a2d50e0-ef2d-4983-bbc4-f8ab3978e850 is claimed 00:06:34.115 [2024-10-25 17:47:52.407664] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:34.115 [2024-10-25 17:47:52.407678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:34.115 [2024-10-25 17:47:52.407933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:34.115 [2024-10-25 17:47:52.408160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:34.115 [2024-10-25 17:47:52.408172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:34.115 [2024-10-25 17:47:52.408315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 17:47:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.115 [2024-10-25 17:47:52.503433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:34.115 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.116 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.116 [2024-10-25 17:47:52.547321] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.116 [2024-10-25 17:47:52.547346] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '160c773b-9b6d-4713-96c0-df13c304a252' was resized: old size 131072, new size 204800 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 [2024-10-25 17:47:52.559225] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.376 [2024-10-25 17:47:52.559248] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2a2d50e0-ef2d-4983-bbc4-f8ab3978e850' was resized: old size 131072, new size 204800 00:06:34.376 [2024-10-25 17:47:52.559276] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:34.376 17:47:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 [2024-10-25 17:47:52.675158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 [2024-10-25 17:47:52.702910] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:34.376 [2024-10-25 17:47:52.702975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:34.376 [2024-10-25 17:47:52.702986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.376 [2024-10-25 17:47:52.703002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:34.376 [2024-10-25 17:47:52.703084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.376 [2024-10-25 17:47:52.703113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.376 [2024-10-25 17:47:52.703124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 [2024-10-25 17:47:52.714848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:34.376 [2024-10-25 17:47:52.714898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.376 [2024-10-25 17:47:52.714916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:34.376 [2024-10-25 17:47:52.714926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.376 
[2024-10-25 17:47:52.716966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.376 [2024-10-25 17:47:52.717048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.376 [2024-10-25 17:47:52.718587] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 160c773b-9b6d-4713-96c0-df13c304a252 00:06:34.376 [2024-10-25 17:47:52.718662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160c773b-9b6d-4713-96c0-df13c304a252 is claimed 00:06:34.376 [2024-10-25 17:47:52.718774] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2a2d50e0-ef2d-4983-bbc4-f8ab3978e850 00:06:34.376 [2024-10-25 17:47:52.718794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2a2d50e0-ef2d-4983-bbc4-f8ab3978e850 is claimed 00:06:34.376 [2024-10-25 17:47:52.718927] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2a2d50e0-ef2d-4983-bbc4-f8ab3978e850 (2) smaller than existing raid bdev Raid (3) 00:06:34.376 [2024-10-25 17:47:52.718949] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 160c773b-9b6d-4713-96c0-df13c304a252: File exists 00:06:34.376 [2024-10-25 17:47:52.718985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:34.376 [2024-10-25 17:47:52.718996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:34.376 [2024-10-25 17:47:52.719240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:34.376 [2024-10-25 17:47:52.719383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:34.376 [2024-10-25 17:47:52.719397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:34.376 [2024-10-25 17:47:52.719591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:34.376 pt0 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.376 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 [2024-10-25 17:47:52.743180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59981 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@950 -- # '[' -z 59981 ']' 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 59981 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.377 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59981 00:06:34.636 killing process with pid 59981 00:06:34.636 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.636 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.636 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59981' 00:06:34.636 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59981 00:06:34.636 [2024-10-25 17:47:52.822511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.636 [2024-10-25 17:47:52.822566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.636 [2024-10-25 17:47:52.822602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.636 [2024-10-25 17:47:52.822611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:34.636 17:47:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59981 00:06:36.017 [2024-10-25 17:47:54.148439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.956 17:47:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:36.956 00:06:36.956 real 0m4.356s 00:06:36.956 user 0m4.553s 00:06:36.956 sys 0m0.549s 00:06:36.956 
17:47:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.956 ************************************ 00:06:36.956 END TEST raid0_resize_superblock_test 00:06:36.956 ************************************ 00:06:36.956 17:47:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 17:47:55 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:36.956 17:47:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:36.956 17:47:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.956 17:47:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 ************************************ 00:06:36.956 START TEST raid1_resize_superblock_test 00:06:36.956 ************************************ 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60074 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60074' 00:06:36.956 Process raid pid: 60074 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60074 00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60074 ']' 00:06:36.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:36.956 17:47:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.956 [2024-10-25 17:47:55.350021] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:06:36.956 [2024-10-25 17:47:55.350242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:37.216 [2024-10-25 17:47:55.520047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.217 [2024-10-25 17:47:55.629960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.476 [2024-10-25 17:47:55.829978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:37.476 [2024-10-25 17:47:55.830097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:37.737 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:37.737 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:37.737 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:37.737 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:37.737 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.307 malloc0
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.307 [2024-10-25 17:47:56.663452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:38.307 [2024-10-25 17:47:56.663544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:38.307 [2024-10-25 17:47:56.663569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:38.307 [2024-10-25 17:47:56.663580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:38.307 [2024-10-25 17:47:56.665616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:38.307 [2024-10-25 17:47:56.665656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:38.307 pt0
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.307 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.567 0382cd44-5a9f-4928-8fc7-aafc2d4f9920
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.567 ed5e94c3-0320-4ff9-a3ba-04f062142216
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.567 ebef0bdd-231b-4881-9e68-e99023570aae
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.567 [2024-10-25 17:47:56.795245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ed5e94c3-0320-4ff9-a3ba-04f062142216 is claimed
00:06:38.567 [2024-10-25 17:47:56.795414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ebef0bdd-231b-4881-9e68-e99023570aae is claimed
00:06:38.567 [2024-10-25 17:47:56.795561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:38.567 [2024-10-25 17:47:56.795576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:38.567 [2024-10-25 17:47:56.795810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:38.567 [2024-10-25 17:47:56.796037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:38.567 [2024-10-25 17:47:56.796050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:38.567 [2024-10-25 17:47:56.796185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.567 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.568 [2024-10-25 17:47:56.907257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.568 [2024-10-25 17:47:56.955109] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:38.568 [2024-10-25 17:47:56.955180] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ed5e94c3-0320-4ff9-a3ba-04f062142216' was resized: old size 131072, new size 204800
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.568 [2024-10-25 17:47:56.967055] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:38.568 [2024-10-25 17:47:56.967124] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ebef0bdd-231b-4881-9e68-e99023570aae' was resized: old size 131072, new size 204800
00:06:38.568 [2024-10-25 17:47:56.967181] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.568 17:47:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:06:38.828 [2024-10-25 17:47:57.078993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.828 [2024-10-25 17:47:57.126706] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:38.828 [2024-10-25 17:47:57.126777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:38.828 [2024-10-25 17:47:57.126803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:38.828 [2024-10-25 17:47:57.126944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:38.828 [2024-10-25 17:47:57.127109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:38.828 [2024-10-25 17:47:57.127171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:38.828 [2024-10-25 17:47:57.127182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.828 [2024-10-25 17:47:57.138638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:38.828 [2024-10-25 17:47:57.138694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:38.828 [2024-10-25 17:47:57.138714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:38.828 [2024-10-25 17:47:57.138727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:38.828 [2024-10-25 17:47:57.140795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:38.828 [2024-10-25 17:47:57.140853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:38.828 [2024-10-25 17:47:57.142384] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ed5e94c3-0320-4ff9-a3ba-04f062142216
00:06:38.828 [2024-10-25 17:47:57.142449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ed5e94c3-0320-4ff9-a3ba-04f062142216 is claimed
00:06:38.828 [2024-10-25 17:47:57.142548] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ebef0bdd-231b-4881-9e68-e99023570aae
00:06:38.828 [2024-10-25 17:47:57.142568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ebef0bdd-231b-4881-9e68-e99023570aae is claimed
00:06:38.828 [2024-10-25 17:47:57.142683] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ebef0bdd-231b-4881-9e68-e99023570aae (2) smaller than existing raid bdev Raid (3)
00:06:38.828 [2024-10-25 17:47:57.142702] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ed5e94c3-0320-4ff9-a3ba-04f062142216: File exists
00:06:38.828 [2024-10-25 17:47:57.142742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:38.828 [2024-10-25 17:47:57.142752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:06:38.828 [2024-10-25 17:47:57.143019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:38.828 [2024-10-25 17:47:57.143179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:38.828 [2024-10-25 17:47:57.143190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:38.828 [2024-10-25 17:47:57.143363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:38.828 pt0
00:06:38.828 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:06:38.829 [2024-10-25 17:47:57.162944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60074
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60074 ']'
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60074
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60074
killing process with pid 60074
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60074'
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60074
00:06:38.829 [2024-10-25 17:47:57.248510] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:38.829 [2024-10-25 17:47:57.248565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:38.829 [2024-10-25 17:47:57.248606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:38.829 [2024-10-25 17:47:57.248614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:38.829 17:47:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60074
00:06:40.214 [2024-10-25 17:47:58.590262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:41.597 ************************************
00:06:41.598 END TEST raid1_resize_superblock_test
00:06:41.598 ************************************
00:06:41.598 17:47:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:41.598
00:06:41.598 real 0m4.363s
00:06:41.598 user 0m4.567s
00:06:41.598 sys 0m0.564s
00:06:41.598 17:47:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.598 17:47:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:06:41.598 17:47:59 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:06:41.598 17:47:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:41.598 17:47:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.598 17:47:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:41.598 ************************************
00:06:41.598 START TEST raid_function_test_raid0
00:06:41.598 ************************************
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60171
00:06:41.598 Process raid pid: 60171
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60171'
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60171
00:06:41.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60171 ']'
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.598 17:47:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:41.598 [2024-10-25 17:47:59.812910] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:06:41.598 [2024-10-25 17:47:59.813028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:41.598 [2024-10-25 17:47:59.986941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.856 [2024-10-25 17:48:00.095611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.856 [2024-10-25 17:48:00.287990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:41.856 [2024-10-25 17:48:00.288027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:42.425 Base_1
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.425 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:42.426 Base_2
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:42.426 [2024-10-25 17:48:00.710834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:42.426 [2024-10-25 17:48:00.712641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:42.426 [2024-10-25 17:48:00.712708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:42.426 [2024-10-25 17:48:00.712720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:06:42.426 [2024-10-25 17:48:00.712976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:42.426 [2024-10-25 17:48:00.713115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:42.426 [2024-10-25 17:48:00.713124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:06:42.426 [2024-10-25 17:48:00.713256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:06:42.426 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:06:42.685 [2024-10-25 17:48:00.946459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:06:42.685 /dev/nbd0
00:06:42.685 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:42.685 17:48:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:42.685 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:42.685 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i
00:06:42.685 17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 ))
17:48:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:42.685 1+0 records in
00:06:42.685 1+0 records out
00:06:42.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048014 s, 8.5 MB/s
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:42.685 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:42.945 {
00:06:42.945 "nbd_device": "/dev/nbd0",
00:06:42.945 "bdev_name": "raid"
00:06:42.945 }
00:06:42.945 ]'
17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:42.945 {
00:06:42.945 "nbd_device": "/dev/nbd0",
00:06:42.945 "bdev_name": "raid"
00:06:42.945 }
00:06:42.945 ]'
17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:42.945 4096+0 records in
00:06:42.945 4096+0 records out
00:06:42.945 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0358528 s, 58.5 MB/s
00:06:42.945 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:43.205 4096+0 records in
00:06:43.205 4096+0 records out
00:06:43.205 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.22876 s, 9.2 MB/s
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:43.205 128+0 records in
00:06:43.205 128+0 records out
00:06:43.205 65536 bytes (66 kB, 64 KiB) copied, 0.00118563 s, 55.3 MB/s
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:43.205 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:43.465 2035+0 records in
00:06:43.465 2035+0 records out
00:06:43.465 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0157968 s, 66.0 MB/s
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:43.465 456+0 records in
00:06:43.465 456+0 records out
00:06:43.465 233472 bytes (233 kB, 228 KiB) copied, 0.00364738 s, 64.0 MB/s
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:43.465 17:48:01
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.465 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.724 [2024-10-25 17:48:01.904939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:43.724 17:48:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.724 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60171 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60171 ']' 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60171 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60171 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.983 killing process with pid 60171 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60171' 00:06:43.983 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60171 
00:06:43.983 [2024-10-25 17:48:02.206436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.983 [2024-10-25 17:48:02.206540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.983 [2024-10-25 17:48:02.206589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.983 [2024-10-25 17:48:02.206609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 17:48:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60171 00:06:43.983 [2024-10-25 17:48:02.397909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.359 17:48:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:45.360 00:06:45.360 real 0m3.692s 00:06:45.360 user 0m4.236s 00:06:45.360 sys 0m0.980s 00:06:45.360 17:48:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.360 17:48:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 ************************************ 00:06:45.360 END TEST raid_function_test_raid0 00:06:45.360 ************************************ 00:06:45.360 17:48:03 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:45.360 17:48:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.360 17:48:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.360 17:48:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 ************************************ 00:06:45.360 START TEST raid_function_test_concat 00:06:45.360 ************************************ 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat --
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:45.360 Process raid pid: 60295 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60295 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60295' 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60295 00:06:45.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60295 ']' 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.360 17:48:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 [2024-10-25 17:48:03.573843] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:45.360 [2024-10-25 17:48:03.573968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.360 [2024-10-25 17:48:03.726366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.618 [2024-10-25 17:48:03.831935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.618 [2024-10-25 17:48:04.012064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.619 [2024-10-25 17:48:04.012100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 Base_1 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 Base_2 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 [2024-10-25 17:48:04.482360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.187 [2024-10-25 17:48:04.484070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.187 [2024-10-25 17:48:04.484158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.187 [2024-10-25 17:48:04.484171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.187 [2024-10-25 17:48:04.484405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.187 [2024-10-25 17:48:04.484551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.187 [2024-10-25 17:48:04.484560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:46.187 [2024-10-25 17:48:04.484699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.187 17:48:04 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.187 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:46.446 [2024-10-25 17:48:04.730929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:46.446 /dev/nbd0 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.446 1+0 records in 00:06:46.446 1+0 records out 00:06:46.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547679 s, 7.5 MB/s 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:46.446 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:46.706 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.706 { 00:06:46.706 "nbd_device": "/dev/nbd0", 00:06:46.706 "bdev_name": "raid" 00:06:46.706 } 00:06:46.706 ]' 00:06:46.706 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.706 { 00:06:46.706 "nbd_device": "/dev/nbd0", 00:06:46.706 "bdev_name": "raid" 00:06:46.706 } 00:06:46.706 ]' 00:06:46.706 17:48:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:46.706 17:48:05 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:46.706 4096+0 records in 00:06:46.706 4096+0 records out 00:06:46.706 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0267074 s, 78.5 MB/s 00:06:46.706 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:46.965 4096+0 records in 00:06:46.965 4096+0 records out 00:06:46.965 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.187197 s, 11.2 MB/s 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:46.965 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:46.965 128+0 records in 00:06:46.965 128+0 records out 00:06:46.965 65536 bytes (66 kB, 64 KiB) copied, 0.00125048 s, 52.4 MB/s 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:46.966 2035+0 records in 00:06:46.966 2035+0 records out 00:06:46.966 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0134458 s, 77.5 MB/s 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:46.966 17:48:05 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:46.966 456+0 records in 00:06:46.966 456+0 records out 00:06:46.966 233472 bytes (233 kB, 228 KiB) copied, 0.00349649 s, 66.8 MB/s 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.966 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.224 
17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.224 [2024-10-25 17:48:05.612675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.224 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.484 17:48:05 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60295 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60295 ']' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60295 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60295 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.484 killing process with pid 60295 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60295' 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60295 00:06:47.484 [2024-10-25 17:48:05.896220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.484 [2024-10-25 17:48:05.896324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.484 [2024-10-25 17:48:05.896378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.484 [2024-10-25 17:48:05.896390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:47.484 17:48:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60295 00:06:47.743 [2024-10-25 17:48:06.086187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.141 17:48:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.141 00:06:49.141 real 0m3.642s 00:06:49.141 user 0m4.198s 00:06:49.141 sys 0m0.937s 00:06:49.141 17:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.141 17:48:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 ************************************ 00:06:49.141 END TEST raid_function_test_concat 00:06:49.141 ************************************ 00:06:49.141 17:48:07 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:49.141 17:48:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:49.141 17:48:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.141 17:48:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 ************************************ 00:06:49.141 START TEST raid0_resize_test 00:06:49.141 ************************************ 00:06:49.141 17:48:07 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:49.141 Process raid pid: 60416 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60416 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60416' 00:06:49.141 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60416 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60416 ']' 00:06:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.142 17:48:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 [2024-10-25 17:48:07.291238] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:49.142 [2024-10-25 17:48:07.291368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.142 [2024-10-25 17:48:07.463478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.142 [2024-10-25 17:48:07.568414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.410 [2024-10-25 17:48:07.763804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.410 [2024-10-25 17:48:07.763847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.669 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.669 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:49.669 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:49.669 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.669 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.929 Base_1 
00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.929 Base_2 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.929 [2024-10-25 17:48:08.123367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.929 [2024-10-25 17:48:08.125104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.929 [2024-10-25 17:48:08.125159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.929 [2024-10-25 17:48:08.125170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.929 [2024-10-25 17:48:08.125406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.929 [2024-10-25 17:48:08.125523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.929 [2024-10-25 17:48:08.125532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.929 [2024-10-25 17:48:08.125664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
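The creation sequence above reports "blockcnt 131072, blocklen 512" for a raid0 built from two 32 MiB null bdevs. As a minimal sketch of that size arithmetic (the helper name is illustrative, not part of the SPDK scripts): raid0 capacity is the smallest base bdev's block count multiplied by the number of base bdevs, which is also why the later resize of only Base_1 leaves the count unchanged.

```python
def raid0_block_count(base_sizes_mb, blocklen=512):
    """raid0 capacity: smallest base bdev times the number of bases.

    Illustrative helper; mirrors the numbers visible in this log, not
    an SPDK API.
    """
    blocks_per_base = min(base_sizes_mb) * 1024 * 1024 // blocklen
    return blocks_per_base * len(base_sizes_mb)

# Two 32 MiB bases -> 131072 blocks, matching "blockcnt 131072, blocklen 512".
# Resizing one base to 64 MiB changes nothing; resizing both doubles it,
# matching the 131072 -> 262144 transition checked below.
print(raid0_block_count([32, 32]))
print(raid0_block_count([64, 32]))
print(raid0_block_count([64, 64]))
```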
00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.929 [2024-10-25 17:48:08.135321] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.929 [2024-10-25 17:48:08.135394] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:49.929 true 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.929 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.930 [2024-10-25 17:48:08.151470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.930 [2024-10-25 17:48:08.199209] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.930 [2024-10-25 17:48:08.199278] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:49.930 [2024-10-25 17:48:08.199309] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:49.930 true 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.930 [2024-10-25 17:48:08.215345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60416 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60416 ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60416 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60416 00:06:49.930 killing process with pid 60416 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60416' 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60416 00:06:49.930 [2024-10-25 17:48:08.283799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.930 [2024-10-25 17:48:08.283877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.930 [2024-10-25 17:48:08.283918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.930 [2024-10-25 17:48:08.283927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:49.930 17:48:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60416 00:06:49.930 [2024-10-25 17:48:08.300746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.309 ************************************ 00:06:51.309 END TEST raid0_resize_test 00:06:51.309 ************************************ 00:06:51.309 17:48:09 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:06:51.309 00:06:51.309 real 0m2.133s 00:06:51.309 user 0m2.246s 00:06:51.309 sys 0m0.335s 00:06:51.309 17:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.309 17:48:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.309 17:48:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:51.309 17:48:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.309 17:48:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.309 17:48:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.309 ************************************ 00:06:51.309 START TEST raid1_resize_test 00:06:51.309 ************************************ 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:51.309 Process raid pid: 60478 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60478 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60478' 00:06:51.309 17:48:09 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60478 00:06:51.309 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60478 ']' 00:06:51.310 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.310 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.310 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.310 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.310 17:48:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.310 [2024-10-25 17:48:09.494017] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
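raid1_resize_test runs the same raid_resize_test flow with raid_level=1, and the `'[' 1 -eq 0 ']'` branches select a different expected size: a mirror's capacity is only the smallest base bdev, counted once. A hedged sketch of both cases (function name is illustrative):

```python
def expected_raid_blocks(raid_level, base_sizes_mb, blocklen=512):
    """Expected num_blocks reported by bdev_get_bdevs after a resize:
    raid0 stripes across all bases, raid1 mirrors, so for raid1 only the
    smallest base counts once. Illustrative helper, not an SPDK API.
    """
    min_blocks = min(base_sizes_mb) * 1024 * 1024 // blocklen
    return min_blocks * (len(base_sizes_mb) if raid_level == 0 else 1)

# raid1 over two 32 MiB bases starts at 65536 blocks; after both bases
# grow to 64 MiB it reaches 131072, as the jq '.[].num_blocks' checks
# in this section confirm.
print(expected_raid_blocks(1, [32, 32]))
print(expected_raid_blocks(1, [64, 64]))
```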
00:06:51.310 [2024-10-25 17:48:09.494185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.310 [2024-10-25 17:48:09.666212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.569 [2024-10-25 17:48:09.773063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.569 [2024-10-25 17:48:09.958332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.569 [2024-10-25 17:48:09.958415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 Base_1 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 Base_2 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.137 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 [2024-10-25 17:48:10.332964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.138 [2024-10-25 17:48:10.334627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.138 [2024-10-25 17:48:10.334686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:52.138 [2024-10-25 17:48:10.334697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:52.138 [2024-10-25 17:48:10.334937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:52.138 [2024-10-25 17:48:10.335064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:52.138 [2024-10-25 17:48:10.335073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:52.138 [2024-10-25 17:48:10.335208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 [2024-10-25 17:48:10.344940] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.138 [2024-10-25 17:48:10.344970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:52.138 true 00:06:52.138 
17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:52.138 [2024-10-25 17:48:10.357060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 [2024-10-25 17:48:10.404793] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.138 [2024-10-25 17:48:10.404876] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:52.138 [2024-10-25 17:48:10.404907] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:52.138 true 00:06:52.138 17:48:10 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:52.138 [2024-10-25 17:48:10.416958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60478 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60478 ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60478 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60478 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60478' 00:06:52.138 killing process with pid 60478 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60478 00:06:52.138 [2024-10-25 17:48:10.474250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.138 [2024-10-25 17:48:10.474372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.138 17:48:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60478 00:06:52.138 [2024-10-25 17:48:10.474824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.138 [2024-10-25 17:48:10.474909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.138 [2024-10-25 17:48:10.490333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.519 17:48:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:53.519 00:06:53.519 real 0m2.106s 00:06:53.519 user 0m2.213s 00:06:53.519 sys 0m0.318s 00:06:53.519 17:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.519 17:48:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.519 ************************************ 00:06:53.519 END TEST raid1_resize_test 00:06:53.519 ************************************ 00:06:53.519 17:48:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:53.519 17:48:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:53.519 17:48:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:53.519 17:48:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:53.519 17:48:11 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.519 17:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.519 ************************************ 00:06:53.519 START TEST raid_state_function_test 00:06:53.519 ************************************ 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60535 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60535' 00:06:53.519 Process raid pid: 60535 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60535 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60535 ']' 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
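The prologue above picks the create arguments from the test parameters: any level other than raid1 gets a strip size (`strip_size_create_arg='-z 64'`), and with superblock=false the superblock argument stays empty. A small sketch of that selection logic (the `-s` flag and helper name are assumptions for illustration, not taken from the script):

```python
def raid_create_args(raid_level, superblock):
    """Mirror of the argument selection in the test prologue. Names and
    the superblock flag are illustrative assumptions."""
    args = []
    if raid_level != "raid1":
        args += ["-z", "64"]   # strip_size_create_arg='-z 64'
    if superblock:
        args += ["-s"]         # superblock_create_arg (assumed flag)
    return args

# This run: raid0 without superblock -> only the strip-size argument.
print(raid_create_args("raid0", False))
```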
00:06:53.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.519 17:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.519 [2024-10-25 17:48:11.690264] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:53.519 [2024-10-25 17:48:11.690475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.519 [2024-10-25 17:48:11.867400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.779 [2024-10-25 17:48:11.975641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.779 [2024-10-25 17:48:12.166798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.779 [2024-10-25 17:48:12.166892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.350 [2024-10-25 17:48:12.511017] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.350 [2024-10-25 17:48:12.511076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:06:54.350 [2024-10-25 17:48:12.511087] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.350 [2024-10-25 17:48:12.511096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.350 "name": "Existed_Raid", 00:06:54.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.350 "strip_size_kb": 64, 00:06:54.350 "state": "configuring", 00:06:54.350 "raid_level": "raid0", 00:06:54.350 "superblock": false, 00:06:54.350 "num_base_bdevs": 2, 00:06:54.350 "num_base_bdevs_discovered": 0, 00:06:54.350 "num_base_bdevs_operational": 2, 00:06:54.350 "base_bdevs_list": [ 00:06:54.350 { 00:06:54.350 "name": "BaseBdev1", 00:06:54.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.350 "is_configured": false, 00:06:54.350 "data_offset": 0, 00:06:54.350 "data_size": 0 00:06:54.350 }, 00:06:54.350 { 00:06:54.350 "name": "BaseBdev2", 00:06:54.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.350 "is_configured": false, 00:06:54.350 "data_offset": 0, 00:06:54.350 "data_size": 0 00:06:54.350 } 00:06:54.350 ] 00:06:54.350 }' 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.350 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.611 [2024-10-25 17:48:12.942206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:54.611 [2024-10-25 17:48:12.942292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.611 [2024-10-25 17:48:12.954188] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.611 [2024-10-25 17:48:12.954270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.611 [2024-10-25 17:48:12.954296] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.611 [2024-10-25 17:48:12.954321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.611 [2024-10-25 17:48:12.995907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:54.611 BaseBdev1 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.611 17:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.611 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.612 [ 00:06:54.612 { 00:06:54.612 "name": "BaseBdev1", 00:06:54.612 "aliases": [ 00:06:54.612 "a4784565-1629-48f4-be28-d8724bbb9d12" 00:06:54.612 ], 00:06:54.612 "product_name": "Malloc disk", 00:06:54.612 "block_size": 512, 00:06:54.612 "num_blocks": 65536, 00:06:54.612 "uuid": "a4784565-1629-48f4-be28-d8724bbb9d12", 00:06:54.612 "assigned_rate_limits": { 00:06:54.612 "rw_ios_per_sec": 0, 00:06:54.612 "rw_mbytes_per_sec": 0, 00:06:54.612 "r_mbytes_per_sec": 0, 00:06:54.612 "w_mbytes_per_sec": 0 00:06:54.612 }, 00:06:54.612 "claimed": true, 00:06:54.612 "claim_type": "exclusive_write", 00:06:54.612 "zoned": false, 00:06:54.612 "supported_io_types": { 00:06:54.612 "read": true, 00:06:54.612 "write": true, 00:06:54.612 "unmap": true, 00:06:54.612 "flush": true, 00:06:54.612 "reset": true, 00:06:54.612 "nvme_admin": false, 00:06:54.612 "nvme_io": 
false, 00:06:54.612 "nvme_io_md": false, 00:06:54.612 "write_zeroes": true, 00:06:54.612 "zcopy": true, 00:06:54.612 "get_zone_info": false, 00:06:54.612 "zone_management": false, 00:06:54.612 "zone_append": false, 00:06:54.612 "compare": false, 00:06:54.612 "compare_and_write": false, 00:06:54.612 "abort": true, 00:06:54.612 "seek_hole": false, 00:06:54.612 "seek_data": false, 00:06:54.612 "copy": true, 00:06:54.612 "nvme_iov_md": false 00:06:54.612 }, 00:06:54.612 "memory_domains": [ 00:06:54.612 { 00:06:54.612 "dma_device_id": "system", 00:06:54.612 "dma_device_type": 1 00:06:54.612 }, 00:06:54.612 { 00:06:54.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.612 "dma_device_type": 2 00:06:54.612 } 00:06:54.612 ], 00:06:54.612 "driver_specific": {} 00:06:54.612 } 00:06:54.612 ] 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.612 17:48:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.612 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.872 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.872 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.872 "name": "Existed_Raid", 00:06:54.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.872 "strip_size_kb": 64, 00:06:54.872 "state": "configuring", 00:06:54.872 "raid_level": "raid0", 00:06:54.872 "superblock": false, 00:06:54.872 "num_base_bdevs": 2, 00:06:54.872 "num_base_bdevs_discovered": 1, 00:06:54.872 "num_base_bdevs_operational": 2, 00:06:54.872 "base_bdevs_list": [ 00:06:54.872 { 00:06:54.872 "name": "BaseBdev1", 00:06:54.872 "uuid": "a4784565-1629-48f4-be28-d8724bbb9d12", 00:06:54.872 "is_configured": true, 00:06:54.872 "data_offset": 0, 00:06:54.872 "data_size": 65536 00:06:54.872 }, 00:06:54.872 { 00:06:54.872 "name": "BaseBdev2", 00:06:54.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.872 "is_configured": false, 00:06:54.872 "data_offset": 0, 00:06:54.872 "data_size": 0 00:06:54.872 } 00:06:54.872 ] 00:06:54.872 }' 00:06:54.872 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.872 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 17:48:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 [2024-10-25 17:48:13.467103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.132 [2024-10-25 17:48:13.467151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 [2024-10-25 17:48:13.479129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.132 [2024-10-25 17:48:13.480861] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.132 [2024-10-25 17:48:13.480905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.132 "name": "Existed_Raid", 00:06:55.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.132 "strip_size_kb": 64, 00:06:55.132 "state": "configuring", 00:06:55.132 "raid_level": "raid0", 00:06:55.132 "superblock": false, 00:06:55.132 "num_base_bdevs": 2, 00:06:55.132 "num_base_bdevs_discovered": 1, 00:06:55.132 "num_base_bdevs_operational": 2, 
00:06:55.132 "base_bdevs_list": [ 00:06:55.132 { 00:06:55.132 "name": "BaseBdev1", 00:06:55.132 "uuid": "a4784565-1629-48f4-be28-d8724bbb9d12", 00:06:55.132 "is_configured": true, 00:06:55.132 "data_offset": 0, 00:06:55.132 "data_size": 65536 00:06:55.132 }, 00:06:55.132 { 00:06:55.132 "name": "BaseBdev2", 00:06:55.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.132 "is_configured": false, 00:06:55.132 "data_offset": 0, 00:06:55.132 "data_size": 0 00:06:55.132 } 00:06:55.132 ] 00:06:55.132 }' 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.132 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.729 [2024-10-25 17:48:13.931595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.729 [2024-10-25 17:48:13.931710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:55.729 [2024-10-25 17:48:13.931735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:55.729 [2024-10-25 17:48:13.932045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.729 [2024-10-25 17:48:13.932238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:55.729 [2024-10-25 17:48:13.932284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:55.729 [2024-10-25 17:48:13.932583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.729 BaseBdev2 00:06:55.729 
17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:55.729 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.730 [ 00:06:55.730 { 00:06:55.730 "name": "BaseBdev2", 00:06:55.730 "aliases": [ 00:06:55.730 "e894d70f-ce9b-43cc-87c0-57fe3a643364" 00:06:55.730 ], 00:06:55.730 "product_name": "Malloc disk", 00:06:55.730 "block_size": 512, 00:06:55.730 "num_blocks": 65536, 00:06:55.730 "uuid": "e894d70f-ce9b-43cc-87c0-57fe3a643364", 00:06:55.730 "assigned_rate_limits": { 00:06:55.730 "rw_ios_per_sec": 0, 00:06:55.730 "rw_mbytes_per_sec": 0, 
00:06:55.730 "r_mbytes_per_sec": 0, 00:06:55.730 "w_mbytes_per_sec": 0 00:06:55.730 }, 00:06:55.730 "claimed": true, 00:06:55.730 "claim_type": "exclusive_write", 00:06:55.730 "zoned": false, 00:06:55.730 "supported_io_types": { 00:06:55.730 "read": true, 00:06:55.730 "write": true, 00:06:55.730 "unmap": true, 00:06:55.730 "flush": true, 00:06:55.730 "reset": true, 00:06:55.730 "nvme_admin": false, 00:06:55.730 "nvme_io": false, 00:06:55.730 "nvme_io_md": false, 00:06:55.730 "write_zeroes": true, 00:06:55.730 "zcopy": true, 00:06:55.730 "get_zone_info": false, 00:06:55.730 "zone_management": false, 00:06:55.730 "zone_append": false, 00:06:55.730 "compare": false, 00:06:55.730 "compare_and_write": false, 00:06:55.730 "abort": true, 00:06:55.730 "seek_hole": false, 00:06:55.730 "seek_data": false, 00:06:55.730 "copy": true, 00:06:55.730 "nvme_iov_md": false 00:06:55.730 }, 00:06:55.730 "memory_domains": [ 00:06:55.730 { 00:06:55.730 "dma_device_id": "system", 00:06:55.730 "dma_device_type": 1 00:06:55.730 }, 00:06:55.730 { 00:06:55.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.730 "dma_device_type": 2 00:06:55.730 } 00:06:55.730 ], 00:06:55.730 "driver_specific": {} 00:06:55.730 } 00:06:55.730 ] 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.730 17:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.730 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.730 "name": "Existed_Raid", 00:06:55.730 "uuid": "3415c395-bb10-432a-9458-30e491798717", 00:06:55.730 "strip_size_kb": 64, 00:06:55.730 "state": "online", 00:06:55.730 "raid_level": "raid0", 00:06:55.730 "superblock": false, 00:06:55.730 "num_base_bdevs": 2, 00:06:55.730 "num_base_bdevs_discovered": 2, 00:06:55.730 "num_base_bdevs_operational": 2, 00:06:55.730 "base_bdevs_list": [ 00:06:55.730 { 00:06:55.730 "name": "BaseBdev1", 00:06:55.730 "uuid": "a4784565-1629-48f4-be28-d8724bbb9d12", 00:06:55.730 
"is_configured": true, 00:06:55.730 "data_offset": 0, 00:06:55.730 "data_size": 65536 00:06:55.730 }, 00:06:55.730 { 00:06:55.730 "name": "BaseBdev2", 00:06:55.730 "uuid": "e894d70f-ce9b-43cc-87c0-57fe3a643364", 00:06:55.730 "is_configured": true, 00:06:55.730 "data_offset": 0, 00:06:55.730 "data_size": 65536 00:06:55.730 } 00:06:55.730 ] 00:06:55.730 }' 00:06:55.730 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.730 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.989 [2024-10-25 17:48:14.391137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.989 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:06:55.989 "name": "Existed_Raid", 00:06:55.989 "aliases": [ 00:06:55.989 "3415c395-bb10-432a-9458-30e491798717" 00:06:55.989 ], 00:06:55.989 "product_name": "Raid Volume", 00:06:55.989 "block_size": 512, 00:06:55.989 "num_blocks": 131072, 00:06:55.989 "uuid": "3415c395-bb10-432a-9458-30e491798717", 00:06:55.989 "assigned_rate_limits": { 00:06:55.989 "rw_ios_per_sec": 0, 00:06:55.989 "rw_mbytes_per_sec": 0, 00:06:55.989 "r_mbytes_per_sec": 0, 00:06:55.989 "w_mbytes_per_sec": 0 00:06:55.989 }, 00:06:55.989 "claimed": false, 00:06:55.989 "zoned": false, 00:06:55.989 "supported_io_types": { 00:06:55.989 "read": true, 00:06:55.989 "write": true, 00:06:55.989 "unmap": true, 00:06:55.989 "flush": true, 00:06:55.989 "reset": true, 00:06:55.989 "nvme_admin": false, 00:06:55.989 "nvme_io": false, 00:06:55.989 "nvme_io_md": false, 00:06:55.989 "write_zeroes": true, 00:06:55.989 "zcopy": false, 00:06:55.989 "get_zone_info": false, 00:06:55.989 "zone_management": false, 00:06:55.989 "zone_append": false, 00:06:55.989 "compare": false, 00:06:55.989 "compare_and_write": false, 00:06:55.989 "abort": false, 00:06:55.989 "seek_hole": false, 00:06:55.989 "seek_data": false, 00:06:55.989 "copy": false, 00:06:55.989 "nvme_iov_md": false 00:06:55.989 }, 00:06:55.989 "memory_domains": [ 00:06:55.989 { 00:06:55.989 "dma_device_id": "system", 00:06:55.989 "dma_device_type": 1 00:06:55.989 }, 00:06:55.989 { 00:06:55.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.989 "dma_device_type": 2 00:06:55.989 }, 00:06:55.989 { 00:06:55.989 "dma_device_id": "system", 00:06:55.989 "dma_device_type": 1 00:06:55.989 }, 00:06:55.989 { 00:06:55.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.989 "dma_device_type": 2 00:06:55.989 } 00:06:55.989 ], 00:06:55.989 "driver_specific": { 00:06:55.989 "raid": { 00:06:55.989 "uuid": "3415c395-bb10-432a-9458-30e491798717", 00:06:55.989 "strip_size_kb": 64, 00:06:55.989 "state": "online", 00:06:55.989 "raid_level": "raid0", 
00:06:55.989 "superblock": false, 00:06:55.989 "num_base_bdevs": 2, 00:06:55.989 "num_base_bdevs_discovered": 2, 00:06:55.989 "num_base_bdevs_operational": 2, 00:06:55.989 "base_bdevs_list": [ 00:06:55.989 { 00:06:55.989 "name": "BaseBdev1", 00:06:55.990 "uuid": "a4784565-1629-48f4-be28-d8724bbb9d12", 00:06:55.990 "is_configured": true, 00:06:55.990 "data_offset": 0, 00:06:55.990 "data_size": 65536 00:06:55.990 }, 00:06:55.990 { 00:06:55.990 "name": "BaseBdev2", 00:06:55.990 "uuid": "e894d70f-ce9b-43cc-87c0-57fe3a643364", 00:06:55.990 "is_configured": true, 00:06:55.990 "data_offset": 0, 00:06:55.990 "data_size": 65536 00:06:55.990 } 00:06:55.990 ] 00:06:55.990 } 00:06:55.990 } 00:06:55.990 }' 00:06:55.990 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.249 BaseBdev2' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.249 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.249 [2024-10-25 17:48:14.594530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.249 [2024-10-25 17:48:14.594565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.249 [2024-10-25 17:48:14.594612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.507 17:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.507 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.507 "name": "Existed_Raid", 00:06:56.507 "uuid": "3415c395-bb10-432a-9458-30e491798717", 00:06:56.507 "strip_size_kb": 64, 00:06:56.507 "state": "offline", 00:06:56.507 "raid_level": "raid0", 00:06:56.507 "superblock": false, 00:06:56.507 "num_base_bdevs": 2, 00:06:56.507 "num_base_bdevs_discovered": 1, 00:06:56.507 "num_base_bdevs_operational": 1, 00:06:56.507 "base_bdevs_list": [ 00:06:56.507 { 00:06:56.507 "name": null, 00:06:56.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.507 "is_configured": false, 00:06:56.507 "data_offset": 0, 00:06:56.507 "data_size": 65536 00:06:56.507 }, 00:06:56.507 { 00:06:56.507 "name": "BaseBdev2", 00:06:56.507 "uuid": "e894d70f-ce9b-43cc-87c0-57fe3a643364", 00:06:56.508 "is_configured": true, 00:06:56.508 "data_offset": 0, 00:06:56.508 "data_size": 65536 00:06:56.508 } 00:06:56.508 ] 00:06:56.508 }' 00:06:56.508 17:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.508 17:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.766 17:48:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.766 [2024-10-25 17:48:15.109927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:56.766 [2024-10-25 17:48:15.110029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:56.766 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60535 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60535 ']' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60535 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60535 00:06:57.026 killing process with pid 60535 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60535' 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60535 00:06:57.026 [2024-10-25 17:48:15.274930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.026 17:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60535 00:06:57.026 [2024-10-25 17:48:15.290869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.962 ************************************ 00:06:57.962 END TEST raid_state_function_test 00:06:57.962 ************************************ 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:57.962 00:06:57.962 real 0m4.729s 
00:06:57.962 user 0m6.767s 00:06:57.962 sys 0m0.796s 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.962 17:48:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:57.962 17:48:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:57.962 17:48:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.962 17:48:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.962 ************************************ 00:06:57.962 START TEST raid_state_function_test_sb 00:06:57.962 ************************************ 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:57.962 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60781 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60781' 00:06:58.222 Process raid pid: 60781 00:06:58.222 17:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60781 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60781 ']' 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.222 17:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.222 [2024-10-25 17:48:16.483209] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:58.222 [2024-10-25 17:48:16.483391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.481 [2024-10-25 17:48:16.658139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.481 [2024-10-25 17:48:16.761261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.741 [2024-10-25 17:48:16.941620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.741 [2024-10-25 17:48:16.941662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.001 [2024-10-25 17:48:17.312089] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.001 [2024-10-25 17:48:17.312143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.001 [2024-10-25 17:48:17.312152] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.001 [2024-10-25 17:48:17.312161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.001 
17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.001 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.002 "name": "Existed_Raid", 00:06:59.002 "uuid": "cbd61d3b-16fc-4a9f-9273-aa3df2b4952d", 00:06:59.002 "strip_size_kb": 
64, 00:06:59.002 "state": "configuring", 00:06:59.002 "raid_level": "raid0", 00:06:59.002 "superblock": true, 00:06:59.002 "num_base_bdevs": 2, 00:06:59.002 "num_base_bdevs_discovered": 0, 00:06:59.002 "num_base_bdevs_operational": 2, 00:06:59.002 "base_bdevs_list": [ 00:06:59.002 { 00:06:59.002 "name": "BaseBdev1", 00:06:59.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.002 "is_configured": false, 00:06:59.002 "data_offset": 0, 00:06:59.002 "data_size": 0 00:06:59.002 }, 00:06:59.002 { 00:06:59.002 "name": "BaseBdev2", 00:06:59.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.002 "is_configured": false, 00:06:59.002 "data_offset": 0, 00:06:59.002 "data_size": 0 00:06:59.002 } 00:06:59.002 ] 00:06:59.002 }' 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.002 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 [2024-10-25 17:48:17.711340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.570 [2024-10-25 17:48:17.711436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.570 17:48:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 [2024-10-25 17:48:17.723321] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.570 [2024-10-25 17:48:17.723407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.570 [2024-10-25 17:48:17.723433] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.570 [2024-10-25 17:48:17.723458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 [2024-10-25 17:48:17.768918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.570 BaseBdev1 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:59.570 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.571 [ 00:06:59.571 { 00:06:59.571 "name": "BaseBdev1", 00:06:59.571 "aliases": [ 00:06:59.571 "f436d77b-e300-4bd7-b3e5-0d5911a25f70" 00:06:59.571 ], 00:06:59.571 "product_name": "Malloc disk", 00:06:59.571 "block_size": 512, 00:06:59.571 "num_blocks": 65536, 00:06:59.571 "uuid": "f436d77b-e300-4bd7-b3e5-0d5911a25f70", 00:06:59.571 "assigned_rate_limits": { 00:06:59.571 "rw_ios_per_sec": 0, 00:06:59.571 "rw_mbytes_per_sec": 0, 00:06:59.571 "r_mbytes_per_sec": 0, 00:06:59.571 "w_mbytes_per_sec": 0 00:06:59.571 }, 00:06:59.571 "claimed": true, 00:06:59.571 "claim_type": "exclusive_write", 00:06:59.571 "zoned": false, 00:06:59.571 "supported_io_types": { 00:06:59.571 "read": true, 00:06:59.571 "write": true, 00:06:59.571 "unmap": true, 00:06:59.571 "flush": true, 00:06:59.571 "reset": true, 00:06:59.571 "nvme_admin": false, 00:06:59.571 "nvme_io": false, 00:06:59.571 "nvme_io_md": false, 00:06:59.571 "write_zeroes": true, 00:06:59.571 "zcopy": true, 00:06:59.571 "get_zone_info": false, 00:06:59.571 "zone_management": false, 00:06:59.571 "zone_append": false, 00:06:59.571 "compare": false, 00:06:59.571 "compare_and_write": false, 00:06:59.571 
"abort": true, 00:06:59.571 "seek_hole": false, 00:06:59.571 "seek_data": false, 00:06:59.571 "copy": true, 00:06:59.571 "nvme_iov_md": false 00:06:59.571 }, 00:06:59.571 "memory_domains": [ 00:06:59.571 { 00:06:59.571 "dma_device_id": "system", 00:06:59.571 "dma_device_type": 1 00:06:59.571 }, 00:06:59.571 { 00:06:59.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.571 "dma_device_type": 2 00:06:59.571 } 00:06:59.571 ], 00:06:59.571 "driver_specific": {} 00:06:59.571 } 00:06:59.571 ] 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.571 "name": "Existed_Raid", 00:06:59.571 "uuid": "f30cc77f-0401-4b6c-9d33-8a8c68420c5b", 00:06:59.571 "strip_size_kb": 64, 00:06:59.571 "state": "configuring", 00:06:59.571 "raid_level": "raid0", 00:06:59.571 "superblock": true, 00:06:59.571 "num_base_bdevs": 2, 00:06:59.571 "num_base_bdevs_discovered": 1, 00:06:59.571 "num_base_bdevs_operational": 2, 00:06:59.571 "base_bdevs_list": [ 00:06:59.571 { 00:06:59.571 "name": "BaseBdev1", 00:06:59.571 "uuid": "f436d77b-e300-4bd7-b3e5-0d5911a25f70", 00:06:59.571 "is_configured": true, 00:06:59.571 "data_offset": 2048, 00:06:59.571 "data_size": 63488 00:06:59.571 }, 00:06:59.571 { 00:06:59.571 "name": "BaseBdev2", 00:06:59.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.571 "is_configured": false, 00:06:59.571 "data_offset": 0, 00:06:59.571 "data_size": 0 00:06:59.571 } 00:06:59.571 ] 00:06:59.571 }' 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.571 17:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.831 [2024-10-25 17:48:18.244120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.831 [2024-10-25 17:48:18.244160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.831 [2024-10-25 17:48:18.256167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.831 [2024-10-25 17:48:18.257892] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.831 [2024-10-25 17:48:18.257926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.831 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.091 "name": "Existed_Raid", 00:07:00.091 "uuid": "449c2043-3bdf-47b4-bbce-38647ab48300", 00:07:00.091 "strip_size_kb": 64, 00:07:00.091 "state": "configuring", 00:07:00.091 "raid_level": "raid0", 00:07:00.091 "superblock": true, 00:07:00.091 "num_base_bdevs": 2, 00:07:00.091 "num_base_bdevs_discovered": 1, 00:07:00.091 "num_base_bdevs_operational": 2, 00:07:00.091 "base_bdevs_list": [ 00:07:00.091 { 00:07:00.091 "name": "BaseBdev1", 00:07:00.091 "uuid": "f436d77b-e300-4bd7-b3e5-0d5911a25f70", 00:07:00.091 "is_configured": true, 00:07:00.091 "data_offset": 2048, 
00:07:00.091 "data_size": 63488 00:07:00.091 }, 00:07:00.091 { 00:07:00.091 "name": "BaseBdev2", 00:07:00.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.091 "is_configured": false, 00:07:00.091 "data_offset": 0, 00:07:00.091 "data_size": 0 00:07:00.091 } 00:07:00.091 ] 00:07:00.091 }' 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.091 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.350 [2024-10-25 17:48:18.735815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.350 [2024-10-25 17:48:18.736179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:00.350 [2024-10-25 17:48:18.736196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.350 BaseBdev2 00:07:00.350 [2024-10-25 17:48:18.736455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.350 [2024-10-25 17:48:18.736606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:00.350 [2024-10-25 17:48:18.736618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:00.350 [2024-10-25 17:48:18.736762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.350 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.350 [ 00:07:00.350 { 00:07:00.350 "name": "BaseBdev2", 00:07:00.350 "aliases": [ 00:07:00.350 "20735711-1d57-4a2c-96fe-6d4373e55853" 00:07:00.350 ], 00:07:00.350 "product_name": "Malloc disk", 00:07:00.350 "block_size": 512, 00:07:00.350 "num_blocks": 65536, 00:07:00.350 "uuid": "20735711-1d57-4a2c-96fe-6d4373e55853", 00:07:00.350 "assigned_rate_limits": { 00:07:00.350 "rw_ios_per_sec": 0, 00:07:00.350 "rw_mbytes_per_sec": 0, 00:07:00.350 "r_mbytes_per_sec": 0, 00:07:00.350 "w_mbytes_per_sec": 0 00:07:00.350 }, 00:07:00.350 "claimed": true, 00:07:00.350 "claim_type": 
"exclusive_write", 00:07:00.350 "zoned": false, 00:07:00.350 "supported_io_types": { 00:07:00.350 "read": true, 00:07:00.350 "write": true, 00:07:00.350 "unmap": true, 00:07:00.350 "flush": true, 00:07:00.350 "reset": true, 00:07:00.350 "nvme_admin": false, 00:07:00.350 "nvme_io": false, 00:07:00.350 "nvme_io_md": false, 00:07:00.350 "write_zeroes": true, 00:07:00.350 "zcopy": true, 00:07:00.350 "get_zone_info": false, 00:07:00.350 "zone_management": false, 00:07:00.350 "zone_append": false, 00:07:00.350 "compare": false, 00:07:00.350 "compare_and_write": false, 00:07:00.350 "abort": true, 00:07:00.351 "seek_hole": false, 00:07:00.351 "seek_data": false, 00:07:00.351 "copy": true, 00:07:00.351 "nvme_iov_md": false 00:07:00.351 }, 00:07:00.351 "memory_domains": [ 00:07:00.351 { 00:07:00.351 "dma_device_id": "system", 00:07:00.351 "dma_device_type": 1 00:07:00.351 }, 00:07:00.351 { 00:07:00.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.351 "dma_device_type": 2 00:07:00.351 } 00:07:00.351 ], 00:07:00.351 "driver_specific": {} 00:07:00.351 } 00:07:00.351 ] 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.351 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.610 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.610 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.610 "name": "Existed_Raid", 00:07:00.610 "uuid": "449c2043-3bdf-47b4-bbce-38647ab48300", 00:07:00.610 "strip_size_kb": 64, 00:07:00.610 "state": "online", 00:07:00.610 "raid_level": "raid0", 00:07:00.610 "superblock": true, 00:07:00.610 "num_base_bdevs": 2, 00:07:00.610 "num_base_bdevs_discovered": 2, 00:07:00.610 "num_base_bdevs_operational": 2, 00:07:00.610 "base_bdevs_list": [ 00:07:00.610 { 00:07:00.610 "name": "BaseBdev1", 00:07:00.610 "uuid": "f436d77b-e300-4bd7-b3e5-0d5911a25f70", 00:07:00.610 "is_configured": true, 00:07:00.610 "data_offset": 2048, 00:07:00.610 "data_size": 63488 
00:07:00.610 }, 00:07:00.610 { 00:07:00.610 "name": "BaseBdev2", 00:07:00.610 "uuid": "20735711-1d57-4a2c-96fe-6d4373e55853", 00:07:00.610 "is_configured": true, 00:07:00.610 "data_offset": 2048, 00:07:00.611 "data_size": 63488 00:07:00.611 } 00:07:00.611 ] 00:07:00.611 }' 00:07:00.611 17:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.611 17:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.871 [2024-10-25 17:48:19.207324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.871 "name": 
"Existed_Raid", 00:07:00.871 "aliases": [ 00:07:00.871 "449c2043-3bdf-47b4-bbce-38647ab48300" 00:07:00.871 ], 00:07:00.871 "product_name": "Raid Volume", 00:07:00.871 "block_size": 512, 00:07:00.871 "num_blocks": 126976, 00:07:00.871 "uuid": "449c2043-3bdf-47b4-bbce-38647ab48300", 00:07:00.871 "assigned_rate_limits": { 00:07:00.871 "rw_ios_per_sec": 0, 00:07:00.871 "rw_mbytes_per_sec": 0, 00:07:00.871 "r_mbytes_per_sec": 0, 00:07:00.871 "w_mbytes_per_sec": 0 00:07:00.871 }, 00:07:00.871 "claimed": false, 00:07:00.871 "zoned": false, 00:07:00.871 "supported_io_types": { 00:07:00.871 "read": true, 00:07:00.871 "write": true, 00:07:00.871 "unmap": true, 00:07:00.871 "flush": true, 00:07:00.871 "reset": true, 00:07:00.871 "nvme_admin": false, 00:07:00.871 "nvme_io": false, 00:07:00.871 "nvme_io_md": false, 00:07:00.871 "write_zeroes": true, 00:07:00.871 "zcopy": false, 00:07:00.871 "get_zone_info": false, 00:07:00.871 "zone_management": false, 00:07:00.871 "zone_append": false, 00:07:00.871 "compare": false, 00:07:00.871 "compare_and_write": false, 00:07:00.871 "abort": false, 00:07:00.871 "seek_hole": false, 00:07:00.871 "seek_data": false, 00:07:00.871 "copy": false, 00:07:00.871 "nvme_iov_md": false 00:07:00.871 }, 00:07:00.871 "memory_domains": [ 00:07:00.871 { 00:07:00.871 "dma_device_id": "system", 00:07:00.871 "dma_device_type": 1 00:07:00.871 }, 00:07:00.871 { 00:07:00.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.871 "dma_device_type": 2 00:07:00.871 }, 00:07:00.871 { 00:07:00.871 "dma_device_id": "system", 00:07:00.871 "dma_device_type": 1 00:07:00.871 }, 00:07:00.871 { 00:07:00.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.871 "dma_device_type": 2 00:07:00.871 } 00:07:00.871 ], 00:07:00.871 "driver_specific": { 00:07:00.871 "raid": { 00:07:00.871 "uuid": "449c2043-3bdf-47b4-bbce-38647ab48300", 00:07:00.871 "strip_size_kb": 64, 00:07:00.871 "state": "online", 00:07:00.871 "raid_level": "raid0", 00:07:00.871 "superblock": true, 00:07:00.871 
"num_base_bdevs": 2, 00:07:00.871 "num_base_bdevs_discovered": 2, 00:07:00.871 "num_base_bdevs_operational": 2, 00:07:00.871 "base_bdevs_list": [ 00:07:00.871 { 00:07:00.871 "name": "BaseBdev1", 00:07:00.871 "uuid": "f436d77b-e300-4bd7-b3e5-0d5911a25f70", 00:07:00.871 "is_configured": true, 00:07:00.871 "data_offset": 2048, 00:07:00.871 "data_size": 63488 00:07:00.871 }, 00:07:00.871 { 00:07:00.871 "name": "BaseBdev2", 00:07:00.871 "uuid": "20735711-1d57-4a2c-96fe-6d4373e55853", 00:07:00.871 "is_configured": true, 00:07:00.871 "data_offset": 2048, 00:07:00.871 "data_size": 63488 00:07:00.871 } 00:07:00.871 ] 00:07:00.871 } 00:07:00.871 } 00:07:00.871 }' 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:00.871 BaseBdev2' 00:07:00.871 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.130 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.130 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.130 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.131 [2024-10-25 17:48:19.426681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.131 [2024-10-25 17:48:19.426713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.131 [2024-10-25 17:48:19.426762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.131 17:48:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.131 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.390 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.391 "name": "Existed_Raid", 00:07:01.391 "uuid": "449c2043-3bdf-47b4-bbce-38647ab48300", 00:07:01.391 "strip_size_kb": 64, 00:07:01.391 "state": "offline", 00:07:01.391 "raid_level": "raid0", 00:07:01.391 "superblock": true, 00:07:01.391 "num_base_bdevs": 2, 00:07:01.391 "num_base_bdevs_discovered": 1, 00:07:01.391 "num_base_bdevs_operational": 1, 00:07:01.391 "base_bdevs_list": [ 00:07:01.391 { 00:07:01.391 "name": null, 00:07:01.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.391 "is_configured": false, 00:07:01.391 "data_offset": 0, 00:07:01.391 "data_size": 63488 00:07:01.391 }, 00:07:01.391 { 00:07:01.391 "name": "BaseBdev2", 00:07:01.391 "uuid": "20735711-1d57-4a2c-96fe-6d4373e55853", 00:07:01.391 "is_configured": true, 00:07:01.391 "data_offset": 2048, 00:07:01.391 "data_size": 63488 00:07:01.391 } 00:07:01.391 ] 00:07:01.391 }' 00:07:01.391 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.391 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.650 17:48:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.650 17:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.650 [2024-10-25 17:48:19.967348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.650 [2024-10-25 17:48:19.967403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.650 17:48:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60781 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60781 ']' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60781 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60781 00:07:01.909 killing process with pid 60781 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60781' 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60781 00:07:01.909 [2024-10-25 17:48:20.153657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.909 17:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60781 00:07:01.910 [2024-10-25 17:48:20.169694] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.849 17:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:02.849 00:07:02.849 real 0m4.810s 00:07:02.849 user 0m6.957s 00:07:02.849 sys 0m0.794s 00:07:02.849 17:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.849 ************************************ 00:07:02.849 END TEST raid_state_function_test_sb 00:07:02.849 ************************************ 00:07:02.849 17:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.849 17:48:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:02.849 17:48:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.849 17:48:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.849 17:48:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.849 ************************************ 00:07:02.849 START TEST raid_superblock_test 00:07:02.849 ************************************ 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61030 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61030 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61030 ']' 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.849 17:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.109 [2024-10-25 17:48:21.356606] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:03.109 [2024-10-25 17:48:21.356797] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61030 ] 00:07:03.109 [2024-10-25 17:48:21.528420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.369 [2024-10-25 17:48:21.636985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.629 [2024-10-25 17:48:21.824386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.629 [2024-10-25 17:48:21.824473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:03.895 17:48:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 malloc1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 [2024-10-25 17:48:22.214325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:03.895 [2024-10-25 17:48:22.214472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.895 [2024-10-25 17:48:22.214511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:03.895 [2024-10-25 17:48:22.214538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.895 [2024-10-25 17:48:22.216580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.895 [2024-10-25 17:48:22.216650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:03.895 pt1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:03.895 17:48:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 malloc2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 [2024-10-25 17:48:22.268467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:03.895 [2024-10-25 17:48:22.268519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.895 [2024-10-25 17:48:22.268540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:03.895 
[2024-10-25 17:48:22.268548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.895 [2024-10-25 17:48:22.270544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.895 [2024-10-25 17:48:22.270623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:03.895 pt2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 [2024-10-25 17:48:22.280485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:03.895 [2024-10-25 17:48:22.282279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:03.895 [2024-10-25 17:48:22.282424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:03.895 [2024-10-25 17:48:22.282437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.895 [2024-10-25 17:48:22.282657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.895 [2024-10-25 17:48:22.282793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:03.895 [2024-10-25 17:48:22.282804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:03.895 [2024-10-25 17:48:22.282951] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.179 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.179 "name": "raid_bdev1", 00:07:04.179 "uuid": 
"9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:04.179 "strip_size_kb": 64, 00:07:04.179 "state": "online", 00:07:04.179 "raid_level": "raid0", 00:07:04.179 "superblock": true, 00:07:04.179 "num_base_bdevs": 2, 00:07:04.179 "num_base_bdevs_discovered": 2, 00:07:04.179 "num_base_bdevs_operational": 2, 00:07:04.179 "base_bdevs_list": [ 00:07:04.179 { 00:07:04.179 "name": "pt1", 00:07:04.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.179 "is_configured": true, 00:07:04.179 "data_offset": 2048, 00:07:04.179 "data_size": 63488 00:07:04.179 }, 00:07:04.179 { 00:07:04.179 "name": "pt2", 00:07:04.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.179 "is_configured": true, 00:07:04.179 "data_offset": 2048, 00:07:04.179 "data_size": 63488 00:07:04.179 } 00:07:04.179 ] 00:07:04.179 }' 00:07:04.179 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.179 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.452 
17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.452 [2024-10-25 17:48:22.708070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.452 "name": "raid_bdev1", 00:07:04.452 "aliases": [ 00:07:04.452 "9db297f4-353c-466e-b70d-58c4d22a1573" 00:07:04.452 ], 00:07:04.452 "product_name": "Raid Volume", 00:07:04.452 "block_size": 512, 00:07:04.452 "num_blocks": 126976, 00:07:04.452 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:04.452 "assigned_rate_limits": { 00:07:04.452 "rw_ios_per_sec": 0, 00:07:04.452 "rw_mbytes_per_sec": 0, 00:07:04.452 "r_mbytes_per_sec": 0, 00:07:04.452 "w_mbytes_per_sec": 0 00:07:04.452 }, 00:07:04.452 "claimed": false, 00:07:04.452 "zoned": false, 00:07:04.452 "supported_io_types": { 00:07:04.452 "read": true, 00:07:04.452 "write": true, 00:07:04.452 "unmap": true, 00:07:04.452 "flush": true, 00:07:04.452 "reset": true, 00:07:04.452 "nvme_admin": false, 00:07:04.452 "nvme_io": false, 00:07:04.452 "nvme_io_md": false, 00:07:04.452 "write_zeroes": true, 00:07:04.452 "zcopy": false, 00:07:04.452 "get_zone_info": false, 00:07:04.452 "zone_management": false, 00:07:04.452 "zone_append": false, 00:07:04.452 "compare": false, 00:07:04.452 "compare_and_write": false, 00:07:04.452 "abort": false, 00:07:04.452 "seek_hole": false, 00:07:04.452 "seek_data": false, 00:07:04.452 "copy": false, 00:07:04.452 "nvme_iov_md": false 00:07:04.452 }, 00:07:04.452 "memory_domains": [ 00:07:04.452 { 00:07:04.452 "dma_device_id": "system", 00:07:04.452 "dma_device_type": 1 00:07:04.452 }, 00:07:04.452 { 00:07:04.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.452 "dma_device_type": 2 00:07:04.452 }, 00:07:04.452 { 00:07:04.452 "dma_device_id": "system", 00:07:04.452 
"dma_device_type": 1 00:07:04.452 }, 00:07:04.452 { 00:07:04.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.452 "dma_device_type": 2 00:07:04.452 } 00:07:04.452 ], 00:07:04.452 "driver_specific": { 00:07:04.452 "raid": { 00:07:04.452 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:04.452 "strip_size_kb": 64, 00:07:04.452 "state": "online", 00:07:04.452 "raid_level": "raid0", 00:07:04.452 "superblock": true, 00:07:04.452 "num_base_bdevs": 2, 00:07:04.452 "num_base_bdevs_discovered": 2, 00:07:04.452 "num_base_bdevs_operational": 2, 00:07:04.452 "base_bdevs_list": [ 00:07:04.452 { 00:07:04.452 "name": "pt1", 00:07:04.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.452 "is_configured": true, 00:07:04.452 "data_offset": 2048, 00:07:04.452 "data_size": 63488 00:07:04.452 }, 00:07:04.452 { 00:07:04.452 "name": "pt2", 00:07:04.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.452 "is_configured": true, 00:07:04.452 "data_offset": 2048, 00:07:04.452 "data_size": 63488 00:07:04.452 } 00:07:04.452 ] 00:07:04.452 } 00:07:04.452 } 00:07:04.452 }' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:04.452 pt2' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.452 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:04.712 [2024-10-25 17:48:22.935636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9db297f4-353c-466e-b70d-58c4d22a1573 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9db297f4-353c-466e-b70d-58c4d22a1573 ']' 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 [2024-10-25 17:48:22.983317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.712 [2024-10-25 17:48:22.983393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.712 [2024-10-25 17:48:22.983474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.712 [2024-10-25 17:48:22.983520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.712 [2024-10-25 17:48:22.983532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:04.712 17:48:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 
17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.712 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 [2024-10-25 17:48:23.127075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:04.712 [2024-10-25 17:48:23.128864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:04.712 [2024-10-25 17:48:23.128927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:04.712 [2024-10-25 17:48:23.128971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:04.712 [2024-10-25 17:48:23.128985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.712 [2024-10-25 17:48:23.128996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:04.712 request: 00:07:04.712 { 00:07:04.712 "name": "raid_bdev1", 00:07:04.712 "raid_level": "raid0", 00:07:04.712 "base_bdevs": [ 00:07:04.712 "malloc1", 00:07:04.712 "malloc2" 00:07:04.712 ], 00:07:04.712 "strip_size_kb": 64, 00:07:04.712 "superblock": false, 00:07:04.712 "method": "bdev_raid_create", 00:07:04.713 "req_id": 1 00:07:04.713 } 00:07:04.713 Got JSON-RPC error response 00:07:04.713 response: 00:07:04.713 { 00:07:04.713 "code": -17, 00:07:04.713 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:04.713 } 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.713 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.973 [2024-10-25 17:48:23.190945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:04.973 [2024-10-25 17:48:23.191039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.973 [2024-10-25 17:48:23.191073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:04.973 [2024-10-25 17:48:23.191102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.973 [2024-10-25 17:48:23.193211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.973 [2024-10-25 17:48:23.193280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:04.973 [2024-10-25 17:48:23.193365] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:04.973 [2024-10-25 17:48:23.193436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:04.973 pt1 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.973 "name": "raid_bdev1", 00:07:04.973 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:04.973 "strip_size_kb": 64, 00:07:04.973 "state": "configuring", 00:07:04.973 "raid_level": "raid0", 00:07:04.973 "superblock": true, 00:07:04.973 "num_base_bdevs": 2, 00:07:04.973 "num_base_bdevs_discovered": 1, 00:07:04.973 "num_base_bdevs_operational": 2, 00:07:04.973 "base_bdevs_list": [ 00:07:04.973 { 00:07:04.973 "name": "pt1", 00:07:04.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.973 "is_configured": true, 00:07:04.973 "data_offset": 2048, 00:07:04.973 "data_size": 63488 00:07:04.973 }, 00:07:04.973 { 00:07:04.973 "name": null, 00:07:04.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.973 "is_configured": false, 00:07:04.973 "data_offset": 2048, 00:07:04.973 "data_size": 63488 00:07:04.973 } 00:07:04.973 ] 00:07:04.973 }' 00:07:04.973 17:48:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.973 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.233 [2024-10-25 17:48:23.654156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:05.233 [2024-10-25 17:48:23.654257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.233 [2024-10-25 17:48:23.654278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:05.233 [2024-10-25 17:48:23.654288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.233 [2024-10-25 17:48:23.654657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.233 [2024-10-25 17:48:23.654678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:05.233 [2024-10-25 17:48:23.654739] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:05.233 [2024-10-25 17:48:23.654758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:05.233 [2024-10-25 17:48:23.654895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:05.233 [2024-10-25 17:48:23.654907] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.233 [2024-10-25 17:48:23.655125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:05.233 [2024-10-25 17:48:23.655270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:05.233 [2024-10-25 17:48:23.655286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:05.233 [2024-10-25 17:48:23.655409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.233 pt2 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.233 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.234 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.234 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.234 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.493 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.493 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.493 "name": "raid_bdev1", 00:07:05.493 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:05.493 "strip_size_kb": 64, 00:07:05.493 "state": "online", 00:07:05.493 "raid_level": "raid0", 00:07:05.493 "superblock": true, 00:07:05.493 "num_base_bdevs": 2, 00:07:05.493 "num_base_bdevs_discovered": 2, 00:07:05.493 "num_base_bdevs_operational": 2, 00:07:05.493 "base_bdevs_list": [ 00:07:05.493 { 00:07:05.493 "name": "pt1", 00:07:05.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:05.493 "is_configured": true, 00:07:05.493 "data_offset": 2048, 00:07:05.493 "data_size": 63488 00:07:05.493 }, 00:07:05.493 { 00:07:05.493 "name": "pt2", 00:07:05.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.493 "is_configured": true, 00:07:05.493 "data_offset": 2048, 00:07:05.493 "data_size": 63488 00:07:05.493 } 00:07:05.493 ] 00:07:05.493 }' 00:07:05.493 17:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.493 17:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:05.753 
17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.753 [2024-10-25 17:48:24.113735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.753 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.753 "name": "raid_bdev1", 00:07:05.753 "aliases": [ 00:07:05.753 "9db297f4-353c-466e-b70d-58c4d22a1573" 00:07:05.753 ], 00:07:05.753 "product_name": "Raid Volume", 00:07:05.753 "block_size": 512, 00:07:05.753 "num_blocks": 126976, 00:07:05.753 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:05.753 "assigned_rate_limits": { 00:07:05.753 "rw_ios_per_sec": 0, 00:07:05.753 "rw_mbytes_per_sec": 0, 00:07:05.753 "r_mbytes_per_sec": 0, 00:07:05.753 "w_mbytes_per_sec": 0 00:07:05.753 }, 00:07:05.753 "claimed": false, 00:07:05.753 "zoned": false, 00:07:05.753 "supported_io_types": { 00:07:05.753 "read": true, 00:07:05.753 "write": true, 00:07:05.753 "unmap": true, 00:07:05.753 "flush": true, 00:07:05.753 "reset": true, 00:07:05.753 "nvme_admin": false, 00:07:05.753 "nvme_io": false, 00:07:05.753 "nvme_io_md": false, 00:07:05.753 
"write_zeroes": true, 00:07:05.753 "zcopy": false, 00:07:05.753 "get_zone_info": false, 00:07:05.753 "zone_management": false, 00:07:05.753 "zone_append": false, 00:07:05.753 "compare": false, 00:07:05.753 "compare_and_write": false, 00:07:05.753 "abort": false, 00:07:05.753 "seek_hole": false, 00:07:05.753 "seek_data": false, 00:07:05.753 "copy": false, 00:07:05.753 "nvme_iov_md": false 00:07:05.753 }, 00:07:05.753 "memory_domains": [ 00:07:05.753 { 00:07:05.753 "dma_device_id": "system", 00:07:05.753 "dma_device_type": 1 00:07:05.753 }, 00:07:05.753 { 00:07:05.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.753 "dma_device_type": 2 00:07:05.753 }, 00:07:05.753 { 00:07:05.753 "dma_device_id": "system", 00:07:05.753 "dma_device_type": 1 00:07:05.753 }, 00:07:05.753 { 00:07:05.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.753 "dma_device_type": 2 00:07:05.753 } 00:07:05.753 ], 00:07:05.753 "driver_specific": { 00:07:05.753 "raid": { 00:07:05.753 "uuid": "9db297f4-353c-466e-b70d-58c4d22a1573", 00:07:05.753 "strip_size_kb": 64, 00:07:05.753 "state": "online", 00:07:05.753 "raid_level": "raid0", 00:07:05.753 "superblock": true, 00:07:05.753 "num_base_bdevs": 2, 00:07:05.753 "num_base_bdevs_discovered": 2, 00:07:05.753 "num_base_bdevs_operational": 2, 00:07:05.753 "base_bdevs_list": [ 00:07:05.753 { 00:07:05.753 "name": "pt1", 00:07:05.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:05.754 "is_configured": true, 00:07:05.754 "data_offset": 2048, 00:07:05.754 "data_size": 63488 00:07:05.754 }, 00:07:05.754 { 00:07:05.754 "name": "pt2", 00:07:05.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.754 "is_configured": true, 00:07:05.754 "data_offset": 2048, 00:07:05.754 "data_size": 63488 00:07:05.754 } 00:07:05.754 ] 00:07:05.754 } 00:07:05.754 } 00:07:05.754 }' 00:07:05.754 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:05.754 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:05.754 pt2' 00:07:05.754 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.013 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:06.013 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.013 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:06.013 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.014 17:48:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:06.014 [2024-10-25 17:48:24.341266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9db297f4-353c-466e-b70d-58c4d22a1573 '!=' 9db297f4-353c-466e-b70d-58c4d22a1573 ']' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61030 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61030 ']' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61030 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61030 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61030' 00:07:06.014 killing process with pid 61030 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61030 00:07:06.014 [2024-10-25 17:48:24.430025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.014 [2024-10-25 17:48:24.430163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.014 17:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61030 00:07:06.014 [2024-10-25 17:48:24.430238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.014 [2024-10-25 17:48:24.430258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:06.273 [2024-10-25 17:48:24.624306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.652 17:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:07.652 00:07:07.652 real 0m4.390s 00:07:07.652 user 0m6.194s 00:07:07.652 sys 0m0.727s 00:07:07.652 17:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.652 ************************************ 00:07:07.652 END TEST raid_superblock_test 00:07:07.652 ************************************ 00:07:07.652 17:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 17:48:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:07.652 17:48:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:07.652 17:48:25 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:07.652 17:48:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 ************************************ 00:07:07.652 START TEST raid_read_error_test 00:07:07.652 ************************************ 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:07.652 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5C2qFME84k 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61236 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61236 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61236 ']' 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.653 17:48:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.653 [2024-10-25 17:48:25.836051] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:07.653 [2024-10-25 17:48:25.836236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61236 ] 00:07:07.653 [2024-10-25 17:48:26.008281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.913 [2024-10-25 17:48:26.114700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.913 [2024-10-25 17:48:26.291904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.913 [2024-10-25 17:48:26.292020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 BaseBdev1_malloc 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 true 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 [2024-10-25 17:48:26.706814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:08.484 [2024-10-25 17:48:26.706886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.484 [2024-10-25 17:48:26.706905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:08.484 [2024-10-25 17:48:26.706914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.484 [2024-10-25 17:48:26.708942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.484 [2024-10-25 17:48:26.708982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:08.484 BaseBdev1 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:08.484 BaseBdev2_malloc 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 true 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 [2024-10-25 17:48:26.769861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:08.484 [2024-10-25 17:48:26.769992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.484 [2024-10-25 17:48:26.770011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:08.484 [2024-10-25 17:48:26.770021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.484 [2024-10-25 17:48:26.772019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.484 [2024-10-25 17:48:26.772069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:08.484 BaseBdev2 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:08.484 17:48:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 [2024-10-25 17:48:26.781899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.484 [2024-10-25 17:48:26.783644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:08.484 [2024-10-25 17:48:26.783821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:08.484 [2024-10-25 17:48:26.783848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.484 [2024-10-25 17:48:26.784070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.484 [2024-10-25 17:48:26.784246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:08.484 [2024-10-25 17:48:26.784258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:08.484 [2024-10-25 17:48:26.784406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.484 "name": "raid_bdev1", 00:07:08.484 "uuid": "ea6ab88a-8126-470c-a6bb-927f6dc8210d", 00:07:08.484 "strip_size_kb": 64, 00:07:08.484 "state": "online", 00:07:08.484 "raid_level": "raid0", 00:07:08.484 "superblock": true, 00:07:08.484 "num_base_bdevs": 2, 00:07:08.484 "num_base_bdevs_discovered": 2, 00:07:08.484 "num_base_bdevs_operational": 2, 00:07:08.484 "base_bdevs_list": [ 00:07:08.484 { 00:07:08.484 "name": "BaseBdev1", 00:07:08.484 "uuid": "bc36d721-472e-5976-b5b3-afb6300dc767", 00:07:08.484 "is_configured": true, 00:07:08.484 "data_offset": 2048, 00:07:08.484 "data_size": 63488 00:07:08.484 }, 00:07:08.484 { 00:07:08.484 "name": "BaseBdev2", 00:07:08.484 "uuid": "37f3a05d-4917-5858-adf1-08a5998234d6", 00:07:08.484 "is_configured": true, 00:07:08.484 "data_offset": 2048, 00:07:08.484 "data_size": 63488 00:07:08.484 } 00:07:08.484 ] 00:07:08.484 }' 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.484 17:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.054 17:48:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:09.054 17:48:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:09.054 [2024-10-25 17:48:27.338090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.994 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.995 "name": "raid_bdev1", 00:07:09.995 "uuid": "ea6ab88a-8126-470c-a6bb-927f6dc8210d", 00:07:09.995 "strip_size_kb": 64, 00:07:09.995 "state": "online", 00:07:09.995 "raid_level": "raid0", 00:07:09.995 "superblock": true, 00:07:09.995 "num_base_bdevs": 2, 00:07:09.995 "num_base_bdevs_discovered": 2, 00:07:09.995 "num_base_bdevs_operational": 2, 00:07:09.995 "base_bdevs_list": [ 00:07:09.995 { 00:07:09.995 "name": "BaseBdev1", 00:07:09.995 "uuid": "bc36d721-472e-5976-b5b3-afb6300dc767", 00:07:09.995 "is_configured": true, 00:07:09.995 "data_offset": 2048, 00:07:09.995 "data_size": 63488 00:07:09.995 }, 00:07:09.995 { 00:07:09.995 "name": "BaseBdev2", 00:07:09.995 "uuid": "37f3a05d-4917-5858-adf1-08a5998234d6", 00:07:09.995 "is_configured": true, 00:07:09.995 "data_offset": 2048, 00:07:09.995 "data_size": 63488 00:07:09.995 } 00:07:09.995 ] 00:07:09.995 }' 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.995 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.255 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:10.255 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.255 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.515 [2024-10-25 17:48:28.695969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.516 [2024-10-25 17:48:28.696104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.516 [2024-10-25 17:48:28.698699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.516 [2024-10-25 17:48:28.698780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.516 [2024-10-25 17:48:28.698839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.516 [2024-10-25 17:48:28.698883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:10.516 { 00:07:10.516 "results": [ 00:07:10.516 { 00:07:10.516 "job": "raid_bdev1", 00:07:10.516 "core_mask": "0x1", 00:07:10.516 "workload": "randrw", 00:07:10.516 "percentage": 50, 00:07:10.516 "status": "finished", 00:07:10.516 "queue_depth": 1, 00:07:10.516 "io_size": 131072, 00:07:10.516 "runtime": 1.358933, 00:07:10.516 "iops": 17633.687606379415, 00:07:10.516 "mibps": 2204.210950797427, 00:07:10.516 "io_failed": 1, 00:07:10.516 "io_timeout": 0, 00:07:10.516 "avg_latency_us": 78.6123697919514, 00:07:10.516 "min_latency_us": 24.258515283842794, 00:07:10.516 "max_latency_us": 1395.1441048034935 00:07:10.516 } 00:07:10.516 ], 00:07:10.516 "core_count": 1 00:07:10.516 } 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61236 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61236 ']' 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61236 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61236 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61236' 00:07:10.516 killing process with pid 61236 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61236 00:07:10.516 [2024-10-25 17:48:28.745050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.516 17:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61236 00:07:10.516 [2024-10-25 17:48:28.873447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5C2qFME84k 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:11.900 00:07:11.900 real 0m4.239s 00:07:11.900 user 0m5.082s 00:07:11.900 sys 0m0.542s 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.900 ************************************ 00:07:11.900 END TEST raid_read_error_test 00:07:11.900 ************************************ 00:07:11.900 17:48:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.900 17:48:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:11.900 17:48:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.900 17:48:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.900 17:48:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.900 ************************************ 00:07:11.900 START TEST raid_write_error_test 00:07:11.900 ************************************ 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.900 17:48:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PaULC3v9UH 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61382 00:07:11.900 17:48:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61382 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61382 ']' 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.900 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.900 [2024-10-25 17:48:30.143538] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:07:11.900 [2024-10-25 17:48:30.143723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:07:11.900 [2024-10-25 17:48:30.309395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.161 [2024-10-25 17:48:30.411756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.422 [2024-10-25 17:48:30.602750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.422 [2024-10-25 17:48:30.602804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.682 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.682 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.683 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.683 17:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.683 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 BaseBdev1_malloc 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 true 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 [2024-10-25 17:48:31.019722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.683 [2024-10-25 17:48:31.019786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.683 [2024-10-25 17:48:31.019805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:12.683 [2024-10-25 17:48:31.019816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.683 [2024-10-25 17:48:31.021991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.683 [2024-10-25 17:48:31.022032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.683 BaseBdev1 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 BaseBdev2_malloc 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.683 17:48:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 true 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 [2024-10-25 17:48:31.085174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.683 [2024-10-25 17:48:31.085231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.683 [2024-10-25 17:48:31.085248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.683 [2024-10-25 17:48:31.085269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.683 [2024-10-25 17:48:31.087308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.683 [2024-10-25 17:48:31.087348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.683 BaseBdev2 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.683 [2024-10-25 17:48:31.097215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:12.683 [2024-10-25 17:48:31.098929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.683 [2024-10-25 17:48:31.099106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.683 [2024-10-25 17:48:31.099123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.683 [2024-10-25 17:48:31.099328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.683 [2024-10-25 17:48:31.099494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.683 [2024-10-25 17:48:31.099506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.683 [2024-10-25 17:48:31.099666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.683 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.943 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.943 "name": "raid_bdev1", 00:07:12.943 "uuid": "7af631b5-d36c-4384-8432-23629c4a6181", 00:07:12.943 "strip_size_kb": 64, 00:07:12.943 "state": "online", 00:07:12.943 "raid_level": "raid0", 00:07:12.943 "superblock": true, 00:07:12.943 "num_base_bdevs": 2, 00:07:12.943 "num_base_bdevs_discovered": 2, 00:07:12.943 "num_base_bdevs_operational": 2, 00:07:12.943 "base_bdevs_list": [ 00:07:12.943 { 00:07:12.943 "name": "BaseBdev1", 00:07:12.943 "uuid": "c8ebc1cf-c6b0-5ffb-8abc-cec2488e7af7", 00:07:12.943 "is_configured": true, 00:07:12.943 "data_offset": 2048, 00:07:12.943 "data_size": 63488 00:07:12.943 }, 00:07:12.943 { 00:07:12.943 "name": "BaseBdev2", 00:07:12.943 "uuid": "1e44368c-3414-593d-9de0-4bd7bd02175c", 00:07:12.943 "is_configured": true, 00:07:12.943 "data_offset": 2048, 00:07:12.943 "data_size": 63488 00:07:12.943 } 00:07:12.943 ] 00:07:12.943 }' 00:07:12.943 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.943 17:48:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.204 17:48:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:13.204 17:48:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:13.204 [2024-10-25 17:48:31.625554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.144 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.145 17:48:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.145 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.404 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.404 "name": "raid_bdev1", 00:07:14.404 "uuid": "7af631b5-d36c-4384-8432-23629c4a6181", 00:07:14.404 "strip_size_kb": 64, 00:07:14.404 "state": "online", 00:07:14.404 "raid_level": "raid0", 00:07:14.404 "superblock": true, 00:07:14.404 "num_base_bdevs": 2, 00:07:14.404 "num_base_bdevs_discovered": 2, 00:07:14.404 "num_base_bdevs_operational": 2, 00:07:14.404 "base_bdevs_list": [ 00:07:14.404 { 00:07:14.404 "name": "BaseBdev1", 00:07:14.404 "uuid": "c8ebc1cf-c6b0-5ffb-8abc-cec2488e7af7", 00:07:14.404 "is_configured": true, 00:07:14.404 "data_offset": 2048, 00:07:14.404 "data_size": 63488 00:07:14.404 }, 00:07:14.404 { 00:07:14.404 "name": "BaseBdev2", 00:07:14.404 "uuid": "1e44368c-3414-593d-9de0-4bd7bd02175c", 00:07:14.404 "is_configured": true, 00:07:14.404 "data_offset": 2048, 00:07:14.404 "data_size": 63488 00:07:14.404 } 00:07:14.404 ] 00:07:14.404 }' 00:07:14.404 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.404 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 [2024-10-25 17:48:32.991425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.664 [2024-10-25 17:48:32.991551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.664 [2024-10-25 17:48:32.994123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.664 [2024-10-25 17:48:32.994209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.664 [2024-10-25 17:48:32.994257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.664 [2024-10-25 17:48:32.994300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.664 { 00:07:14.664 "results": [ 00:07:14.664 { 00:07:14.664 "job": "raid_bdev1", 00:07:14.664 "core_mask": "0x1", 00:07:14.664 "workload": "randrw", 00:07:14.664 "percentage": 50, 00:07:14.664 "status": "finished", 00:07:14.664 "queue_depth": 1, 00:07:14.664 "io_size": 131072, 00:07:14.664 "runtime": 1.366834, 00:07:14.664 "iops": 17481.27424398281, 00:07:14.664 "mibps": 2185.1592804978513, 00:07:14.664 "io_failed": 1, 00:07:14.664 "io_timeout": 0, 00:07:14.664 "avg_latency_us": 79.44993524252301, 00:07:14.664 "min_latency_us": 24.370305676855896, 00:07:14.664 "max_latency_us": 1402.2986899563318 00:07:14.664 } 00:07:14.664 ], 00:07:14.664 "core_count": 1 00:07:14.664 } 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61382 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61382 ']' 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61382 00:07:14.664 17:48:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61382 00:07:14.664 killing process with pid 61382 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61382' 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61382 00:07:14.664 [2024-10-25 17:48:33.026378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.664 17:48:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61382 00:07:14.925 [2024-10-25 17:48:33.152773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PaULC3v9UH 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:15.865 ************************************ 00:07:15.865 END TEST raid_write_error_test 00:07:15.865 ************************************ 00:07:15.865 
17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:15.865 00:07:15.865 real 0m4.213s 00:07:15.865 user 0m5.030s 00:07:15.865 sys 0m0.543s 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.865 17:48:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.125 17:48:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:16.125 17:48:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:16.125 17:48:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.125 17:48:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.125 17:48:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.125 ************************************ 00:07:16.125 START TEST raid_state_function_test 00:07:16.125 ************************************ 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61520 00:07:16.125 17:48:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61520' 00:07:16.125 Process raid pid: 61520 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61520 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61520 ']' 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.125 17:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.125 [2024-10-25 17:48:34.425817] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:07:16.125 [2024-10-25 17:48:34.426034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.385 [2024-10-25 17:48:34.599251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.386 [2024-10-25 17:48:34.709111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.646 [2024-10-25 17:48:34.898210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.646 [2024-10-25 17:48:34.898320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.905 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.905 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:16.905 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.905 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.905 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.905 [2024-10-25 17:48:35.246026] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.905 [2024-10-25 17:48:35.246155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.905 [2024-10-25 17:48:35.246170] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.906 [2024-10-25 17:48:35.246179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.906 17:48:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.906 "name": "Existed_Raid", 00:07:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.906 "strip_size_kb": 64, 00:07:16.906 "state": "configuring", 00:07:16.906 
"raid_level": "concat", 00:07:16.906 "superblock": false, 00:07:16.906 "num_base_bdevs": 2, 00:07:16.906 "num_base_bdevs_discovered": 0, 00:07:16.906 "num_base_bdevs_operational": 2, 00:07:16.906 "base_bdevs_list": [ 00:07:16.906 { 00:07:16.906 "name": "BaseBdev1", 00:07:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.906 "is_configured": false, 00:07:16.906 "data_offset": 0, 00:07:16.906 "data_size": 0 00:07:16.906 }, 00:07:16.906 { 00:07:16.906 "name": "BaseBdev2", 00:07:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.906 "is_configured": false, 00:07:16.906 "data_offset": 0, 00:07:16.906 "data_size": 0 00:07:16.906 } 00:07:16.906 ] 00:07:16.906 }' 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.906 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.476 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.476 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.476 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.476 [2024-10-25 17:48:35.681218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.476 [2024-10-25 17:48:35.681252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:17.477 [2024-10-25 17:48:35.689208] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.477 [2024-10-25 17:48:35.689251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.477 [2024-10-25 17:48:35.689260] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.477 [2024-10-25 17:48:35.689271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.477 [2024-10-25 17:48:35.730305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.477 BaseBdev1 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.477 [ 00:07:17.477 { 00:07:17.477 "name": "BaseBdev1", 00:07:17.477 "aliases": [ 00:07:17.477 "3e5830f1-79c4-47a6-9d83-fda9bf731fbc" 00:07:17.477 ], 00:07:17.477 "product_name": "Malloc disk", 00:07:17.477 "block_size": 512, 00:07:17.477 "num_blocks": 65536, 00:07:17.477 "uuid": "3e5830f1-79c4-47a6-9d83-fda9bf731fbc", 00:07:17.477 "assigned_rate_limits": { 00:07:17.477 "rw_ios_per_sec": 0, 00:07:17.477 "rw_mbytes_per_sec": 0, 00:07:17.477 "r_mbytes_per_sec": 0, 00:07:17.477 "w_mbytes_per_sec": 0 00:07:17.477 }, 00:07:17.477 "claimed": true, 00:07:17.477 "claim_type": "exclusive_write", 00:07:17.477 "zoned": false, 00:07:17.477 "supported_io_types": { 00:07:17.477 "read": true, 00:07:17.477 "write": true, 00:07:17.477 "unmap": true, 00:07:17.477 "flush": true, 00:07:17.477 "reset": true, 00:07:17.477 "nvme_admin": false, 00:07:17.477 "nvme_io": false, 00:07:17.477 "nvme_io_md": false, 00:07:17.477 "write_zeroes": true, 00:07:17.477 "zcopy": true, 00:07:17.477 "get_zone_info": false, 00:07:17.477 "zone_management": false, 00:07:17.477 "zone_append": false, 00:07:17.477 "compare": false, 00:07:17.477 "compare_and_write": false, 00:07:17.477 "abort": true, 00:07:17.477 "seek_hole": false, 00:07:17.477 "seek_data": false, 00:07:17.477 "copy": true, 00:07:17.477 "nvme_iov_md": 
false 00:07:17.477 }, 00:07:17.477 "memory_domains": [ 00:07:17.477 { 00:07:17.477 "dma_device_id": "system", 00:07:17.477 "dma_device_type": 1 00:07:17.477 }, 00:07:17.477 { 00:07:17.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.477 "dma_device_type": 2 00:07:17.477 } 00:07:17.477 ], 00:07:17.477 "driver_specific": {} 00:07:17.477 } 00:07:17.477 ] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.477 17:48:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.477 "name": "Existed_Raid", 00:07:17.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.477 "strip_size_kb": 64, 00:07:17.477 "state": "configuring", 00:07:17.477 "raid_level": "concat", 00:07:17.477 "superblock": false, 00:07:17.477 "num_base_bdevs": 2, 00:07:17.477 "num_base_bdevs_discovered": 1, 00:07:17.477 "num_base_bdevs_operational": 2, 00:07:17.477 "base_bdevs_list": [ 00:07:17.477 { 00:07:17.477 "name": "BaseBdev1", 00:07:17.477 "uuid": "3e5830f1-79c4-47a6-9d83-fda9bf731fbc", 00:07:17.477 "is_configured": true, 00:07:17.477 "data_offset": 0, 00:07:17.477 "data_size": 65536 00:07:17.477 }, 00:07:17.477 { 00:07:17.477 "name": "BaseBdev2", 00:07:17.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.477 "is_configured": false, 00:07:17.477 "data_offset": 0, 00:07:17.477 "data_size": 0 00:07:17.477 } 00:07:17.477 ] 00:07:17.477 }' 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.477 17:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.047 [2024-10-25 17:48:36.181566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.047 [2024-10-25 17:48:36.181611] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.047 [2024-10-25 17:48:36.193607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.047 [2024-10-25 17:48:36.195322] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.047 [2024-10-25 17:48:36.195360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.047 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.047 "name": "Existed_Raid", 00:07:18.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.047 "strip_size_kb": 64, 00:07:18.047 "state": "configuring", 00:07:18.047 "raid_level": "concat", 00:07:18.047 "superblock": false, 00:07:18.047 "num_base_bdevs": 2, 00:07:18.047 "num_base_bdevs_discovered": 1, 00:07:18.047 "num_base_bdevs_operational": 2, 00:07:18.047 "base_bdevs_list": [ 00:07:18.047 { 00:07:18.047 "name": "BaseBdev1", 00:07:18.048 "uuid": "3e5830f1-79c4-47a6-9d83-fda9bf731fbc", 00:07:18.048 "is_configured": true, 00:07:18.048 "data_offset": 0, 00:07:18.048 "data_size": 65536 00:07:18.048 }, 00:07:18.048 { 00:07:18.048 "name": "BaseBdev2", 00:07:18.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.048 "is_configured": false, 00:07:18.048 "data_offset": 0, 00:07:18.048 "data_size": 0 
00:07:18.048 } 00:07:18.048 ] 00:07:18.048 }' 00:07:18.048 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.048 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.307 [2024-10-25 17:48:36.665036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.307 [2024-10-25 17:48:36.665165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.307 [2024-10-25 17:48:36.665189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:18.307 [2024-10-25 17:48:36.665492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.307 [2024-10-25 17:48:36.665682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.307 [2024-10-25 17:48:36.665729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:18.307 [2024-10-25 17:48:36.666031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.307 BaseBdev2 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.307 17:48:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.307 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.308 [ 00:07:18.308 { 00:07:18.308 "name": "BaseBdev2", 00:07:18.308 "aliases": [ 00:07:18.308 "0448d5b1-e43e-4917-b727-5c6eb3d4ba64" 00:07:18.308 ], 00:07:18.308 "product_name": "Malloc disk", 00:07:18.308 "block_size": 512, 00:07:18.308 "num_blocks": 65536, 00:07:18.308 "uuid": "0448d5b1-e43e-4917-b727-5c6eb3d4ba64", 00:07:18.308 "assigned_rate_limits": { 00:07:18.308 "rw_ios_per_sec": 0, 00:07:18.308 "rw_mbytes_per_sec": 0, 00:07:18.308 "r_mbytes_per_sec": 0, 00:07:18.308 "w_mbytes_per_sec": 0 00:07:18.308 }, 00:07:18.308 "claimed": true, 00:07:18.308 "claim_type": "exclusive_write", 00:07:18.308 "zoned": false, 00:07:18.308 "supported_io_types": { 00:07:18.308 "read": true, 00:07:18.308 "write": true, 00:07:18.308 "unmap": true, 00:07:18.308 "flush": true, 00:07:18.308 "reset": true, 00:07:18.308 "nvme_admin": false, 00:07:18.308 "nvme_io": false, 00:07:18.308 "nvme_io_md": 
false, 00:07:18.308 "write_zeroes": true, 00:07:18.308 "zcopy": true, 00:07:18.308 "get_zone_info": false, 00:07:18.308 "zone_management": false, 00:07:18.308 "zone_append": false, 00:07:18.308 "compare": false, 00:07:18.308 "compare_and_write": false, 00:07:18.308 "abort": true, 00:07:18.308 "seek_hole": false, 00:07:18.308 "seek_data": false, 00:07:18.308 "copy": true, 00:07:18.308 "nvme_iov_md": false 00:07:18.308 }, 00:07:18.308 "memory_domains": [ 00:07:18.308 { 00:07:18.308 "dma_device_id": "system", 00:07:18.308 "dma_device_type": 1 00:07:18.308 }, 00:07:18.308 { 00:07:18.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.308 "dma_device_type": 2 00:07:18.308 } 00:07:18.308 ], 00:07:18.308 "driver_specific": {} 00:07:18.308 } 00:07:18.308 ] 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.308 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.567 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.567 "name": "Existed_Raid", 00:07:18.567 "uuid": "3cbd76cc-e843-4a69-a008-428c941e3236", 00:07:18.567 "strip_size_kb": 64, 00:07:18.567 "state": "online", 00:07:18.567 "raid_level": "concat", 00:07:18.567 "superblock": false, 00:07:18.567 "num_base_bdevs": 2, 00:07:18.567 "num_base_bdevs_discovered": 2, 00:07:18.567 "num_base_bdevs_operational": 2, 00:07:18.567 "base_bdevs_list": [ 00:07:18.567 { 00:07:18.567 "name": "BaseBdev1", 00:07:18.567 "uuid": "3e5830f1-79c4-47a6-9d83-fda9bf731fbc", 00:07:18.567 "is_configured": true, 00:07:18.567 "data_offset": 0, 00:07:18.567 "data_size": 65536 00:07:18.567 }, 00:07:18.567 { 00:07:18.567 "name": "BaseBdev2", 00:07:18.567 "uuid": "0448d5b1-e43e-4917-b727-5c6eb3d4ba64", 00:07:18.567 "is_configured": true, 00:07:18.567 "data_offset": 0, 00:07:18.567 "data_size": 65536 00:07:18.567 } 00:07:18.567 ] 00:07:18.567 }' 00:07:18.567 17:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:18.568 17:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.828 [2024-10-25 17:48:37.124552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.828 "name": "Existed_Raid", 00:07:18.828 "aliases": [ 00:07:18.828 "3cbd76cc-e843-4a69-a008-428c941e3236" 00:07:18.828 ], 00:07:18.828 "product_name": "Raid Volume", 00:07:18.828 "block_size": 512, 00:07:18.828 "num_blocks": 131072, 00:07:18.828 "uuid": "3cbd76cc-e843-4a69-a008-428c941e3236", 00:07:18.828 "assigned_rate_limits": { 00:07:18.828 "rw_ios_per_sec": 0, 00:07:18.828 "rw_mbytes_per_sec": 0, 00:07:18.828 "r_mbytes_per_sec": 
0, 00:07:18.828 "w_mbytes_per_sec": 0 00:07:18.828 }, 00:07:18.828 "claimed": false, 00:07:18.828 "zoned": false, 00:07:18.828 "supported_io_types": { 00:07:18.828 "read": true, 00:07:18.828 "write": true, 00:07:18.828 "unmap": true, 00:07:18.828 "flush": true, 00:07:18.828 "reset": true, 00:07:18.828 "nvme_admin": false, 00:07:18.828 "nvme_io": false, 00:07:18.828 "nvme_io_md": false, 00:07:18.828 "write_zeroes": true, 00:07:18.828 "zcopy": false, 00:07:18.828 "get_zone_info": false, 00:07:18.828 "zone_management": false, 00:07:18.828 "zone_append": false, 00:07:18.828 "compare": false, 00:07:18.828 "compare_and_write": false, 00:07:18.828 "abort": false, 00:07:18.828 "seek_hole": false, 00:07:18.828 "seek_data": false, 00:07:18.828 "copy": false, 00:07:18.828 "nvme_iov_md": false 00:07:18.828 }, 00:07:18.828 "memory_domains": [ 00:07:18.828 { 00:07:18.828 "dma_device_id": "system", 00:07:18.828 "dma_device_type": 1 00:07:18.828 }, 00:07:18.828 { 00:07:18.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.828 "dma_device_type": 2 00:07:18.828 }, 00:07:18.828 { 00:07:18.828 "dma_device_id": "system", 00:07:18.828 "dma_device_type": 1 00:07:18.828 }, 00:07:18.828 { 00:07:18.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.828 "dma_device_type": 2 00:07:18.828 } 00:07:18.828 ], 00:07:18.828 "driver_specific": { 00:07:18.828 "raid": { 00:07:18.828 "uuid": "3cbd76cc-e843-4a69-a008-428c941e3236", 00:07:18.828 "strip_size_kb": 64, 00:07:18.828 "state": "online", 00:07:18.828 "raid_level": "concat", 00:07:18.828 "superblock": false, 00:07:18.828 "num_base_bdevs": 2, 00:07:18.828 "num_base_bdevs_discovered": 2, 00:07:18.828 "num_base_bdevs_operational": 2, 00:07:18.828 "base_bdevs_list": [ 00:07:18.828 { 00:07:18.828 "name": "BaseBdev1", 00:07:18.828 "uuid": "3e5830f1-79c4-47a6-9d83-fda9bf731fbc", 00:07:18.828 "is_configured": true, 00:07:18.828 "data_offset": 0, 00:07:18.828 "data_size": 65536 00:07:18.828 }, 00:07:18.828 { 00:07:18.828 "name": "BaseBdev2", 
00:07:18.828 "uuid": "0448d5b1-e43e-4917-b727-5c6eb3d4ba64", 00:07:18.828 "is_configured": true, 00:07:18.828 "data_offset": 0, 00:07:18.828 "data_size": 65536 00:07:18.828 } 00:07:18.828 ] 00:07:18.828 } 00:07:18.828 } 00:07:18.828 }' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:18.828 BaseBdev2' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.828 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 [2024-10-25 17:48:37.339970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.101 [2024-10-25 17:48:37.340060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.101 [2024-10-25 17:48:37.340125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.101 "name": "Existed_Raid", 00:07:19.101 "uuid": "3cbd76cc-e843-4a69-a008-428c941e3236", 00:07:19.101 "strip_size_kb": 64, 00:07:19.101 
"state": "offline", 00:07:19.101 "raid_level": "concat", 00:07:19.101 "superblock": false, 00:07:19.101 "num_base_bdevs": 2, 00:07:19.101 "num_base_bdevs_discovered": 1, 00:07:19.101 "num_base_bdevs_operational": 1, 00:07:19.101 "base_bdevs_list": [ 00:07:19.101 { 00:07:19.101 "name": null, 00:07:19.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.101 "is_configured": false, 00:07:19.101 "data_offset": 0, 00:07:19.101 "data_size": 65536 00:07:19.101 }, 00:07:19.101 { 00:07:19.101 "name": "BaseBdev2", 00:07:19.101 "uuid": "0448d5b1-e43e-4917-b727-5c6eb3d4ba64", 00:07:19.101 "is_configured": true, 00:07:19.101 "data_offset": 0, 00:07:19.101 "data_size": 65536 00:07:19.101 } 00:07:19.101 ] 00:07:19.101 }' 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.101 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.687 [2024-10-25 17:48:37.903217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.687 [2024-10-25 17:48:37.903272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.687 17:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61520 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61520 ']' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61520 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61520 00:07:19.687 killing process with pid 61520 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61520' 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61520 00:07:19.687 [2024-10-25 17:48:38.088627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.687 17:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61520 00:07:19.687 [2024-10-25 17:48:38.104933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.070 17:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.070 00:07:21.070 real 0m4.816s 00:07:21.070 user 0m6.964s 00:07:21.070 sys 0m0.802s 00:07:21.070 17:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 ************************************ 00:07:21.071 END TEST raid_state_function_test 00:07:21.071 ************************************ 00:07:21.071 17:48:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:21.071 17:48:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:21.071 17:48:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.071 17:48:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 ************************************ 00:07:21.071 START TEST raid_state_function_test_sb 00:07:21.071 ************************************ 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.071 Process raid pid: 61773 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61773 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61773' 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61773 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61773 ']' 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.071 17:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.071 [2024-10-25 17:48:39.319185] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:21.071 [2024-10-25 17:48:39.319415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.071 [2024-10-25 17:48:39.499855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.331 [2024-10-25 17:48:39.605516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.592 [2024-10-25 17:48:39.800426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.592 [2024-10-25 17:48:39.800541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.853 [2024-10-25 17:48:40.110255] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:21.853 [2024-10-25 17:48:40.110348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.853 [2024-10-25 17:48:40.110380] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.853 [2024-10-25 17:48:40.110403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.853 "name": "Existed_Raid", 00:07:21.853 "uuid": "c08ec898-b25c-4fa9-8029-4afd20136148", 00:07:21.853 "strip_size_kb": 64, 00:07:21.853 "state": "configuring", 00:07:21.853 "raid_level": "concat", 00:07:21.853 "superblock": true, 00:07:21.853 "num_base_bdevs": 2, 00:07:21.853 "num_base_bdevs_discovered": 0, 00:07:21.853 "num_base_bdevs_operational": 2, 00:07:21.853 "base_bdevs_list": [ 00:07:21.853 { 00:07:21.853 "name": "BaseBdev1", 00:07:21.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.853 "is_configured": false, 00:07:21.853 "data_offset": 0, 00:07:21.853 "data_size": 0 00:07:21.853 }, 00:07:21.853 { 00:07:21.853 "name": "BaseBdev2", 00:07:21.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.853 "is_configured": false, 00:07:21.853 "data_offset": 0, 00:07:21.853 "data_size": 0 00:07:21.853 } 00:07:21.853 ] 00:07:21.853 }' 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.853 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 [2024-10-25 17:48:40.557414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:22.425 [2024-10-25 17:48:40.557485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 [2024-10-25 17:48:40.569402] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.425 [2024-10-25 17:48:40.569478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.425 [2024-10-25 17:48:40.569505] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.425 [2024-10-25 17:48:40.569519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 [2024-10-25 17:48:40.614735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.425 BaseBdev1 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.425 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.425 [ 00:07:22.425 { 00:07:22.425 "name": "BaseBdev1", 00:07:22.425 "aliases": [ 00:07:22.425 "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb" 00:07:22.425 ], 00:07:22.425 "product_name": "Malloc disk", 00:07:22.425 "block_size": 512, 00:07:22.425 "num_blocks": 65536, 00:07:22.425 "uuid": "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb", 00:07:22.425 "assigned_rate_limits": { 00:07:22.425 "rw_ios_per_sec": 0, 00:07:22.425 "rw_mbytes_per_sec": 0, 00:07:22.425 "r_mbytes_per_sec": 0, 00:07:22.425 "w_mbytes_per_sec": 0 00:07:22.425 }, 00:07:22.425 "claimed": true, 
00:07:22.425 "claim_type": "exclusive_write", 00:07:22.425 "zoned": false, 00:07:22.425 "supported_io_types": { 00:07:22.425 "read": true, 00:07:22.425 "write": true, 00:07:22.425 "unmap": true, 00:07:22.425 "flush": true, 00:07:22.425 "reset": true, 00:07:22.425 "nvme_admin": false, 00:07:22.425 "nvme_io": false, 00:07:22.425 "nvme_io_md": false, 00:07:22.425 "write_zeroes": true, 00:07:22.425 "zcopy": true, 00:07:22.425 "get_zone_info": false, 00:07:22.425 "zone_management": false, 00:07:22.425 "zone_append": false, 00:07:22.425 "compare": false, 00:07:22.425 "compare_and_write": false, 00:07:22.425 "abort": true, 00:07:22.426 "seek_hole": false, 00:07:22.426 "seek_data": false, 00:07:22.426 "copy": true, 00:07:22.426 "nvme_iov_md": false 00:07:22.426 }, 00:07:22.426 "memory_domains": [ 00:07:22.426 { 00:07:22.426 "dma_device_id": "system", 00:07:22.426 "dma_device_type": 1 00:07:22.426 }, 00:07:22.426 { 00:07:22.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.426 "dma_device_type": 2 00:07:22.426 } 00:07:22.426 ], 00:07:22.426 "driver_specific": {} 00:07:22.426 } 00:07:22.426 ] 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.426 17:48:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.426 "name": "Existed_Raid", 00:07:22.426 "uuid": "c55c7a28-4f55-4752-a562-e13a250cf3c0", 00:07:22.426 "strip_size_kb": 64, 00:07:22.426 "state": "configuring", 00:07:22.426 "raid_level": "concat", 00:07:22.426 "superblock": true, 00:07:22.426 "num_base_bdevs": 2, 00:07:22.426 "num_base_bdevs_discovered": 1, 00:07:22.426 "num_base_bdevs_operational": 2, 00:07:22.426 "base_bdevs_list": [ 00:07:22.426 { 00:07:22.426 "name": "BaseBdev1", 00:07:22.426 "uuid": "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb", 00:07:22.426 "is_configured": true, 00:07:22.426 "data_offset": 2048, 00:07:22.426 "data_size": 63488 00:07:22.426 }, 00:07:22.426 { 00:07:22.426 "name": "BaseBdev2", 00:07:22.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.426 
"is_configured": false, 00:07:22.426 "data_offset": 0, 00:07:22.426 "data_size": 0 00:07:22.426 } 00:07:22.426 ] 00:07:22.426 }' 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.426 17:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 [2024-10-25 17:48:41.089927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.686 [2024-10-25 17:48:41.090005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 [2024-10-25 17:48:41.101969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.686 [2024-10-25 17:48:41.103658] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.686 [2024-10-25 17:48:41.103730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.686 17:48:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.686 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.946 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.946 17:48:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.946 "name": "Existed_Raid", 00:07:22.946 "uuid": "6f79e205-eece-438d-87b0-57e6458863a5", 00:07:22.946 "strip_size_kb": 64, 00:07:22.946 "state": "configuring", 00:07:22.946 "raid_level": "concat", 00:07:22.946 "superblock": true, 00:07:22.946 "num_base_bdevs": 2, 00:07:22.946 "num_base_bdevs_discovered": 1, 00:07:22.946 "num_base_bdevs_operational": 2, 00:07:22.946 "base_bdevs_list": [ 00:07:22.946 { 00:07:22.946 "name": "BaseBdev1", 00:07:22.946 "uuid": "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb", 00:07:22.946 "is_configured": true, 00:07:22.946 "data_offset": 2048, 00:07:22.946 "data_size": 63488 00:07:22.946 }, 00:07:22.946 { 00:07:22.946 "name": "BaseBdev2", 00:07:22.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.946 "is_configured": false, 00:07:22.946 "data_offset": 0, 00:07:22.946 "data_size": 0 00:07:22.946 } 00:07:22.946 ] 00:07:22.946 }' 00:07:22.946 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.946 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 [2024-10-25 17:48:41.581050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.206 [2024-10-25 17:48:41.581397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.206 BaseBdev2 00:07:23.206 [2024-10-25 17:48:41.581450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.206 [2024-10-25 17:48:41.581718] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.206 [2024-10-25 17:48:41.581895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.206 [2024-10-25 17:48:41.581909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:23.206 [2024-10-25 17:48:41.582045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.206 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.206 
17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 [ 00:07:23.206 { 00:07:23.206 "name": "BaseBdev2", 00:07:23.206 "aliases": [ 00:07:23.206 "4390a71f-490c-4e51-a99f-fe7ac03323d3" 00:07:23.206 ], 00:07:23.206 "product_name": "Malloc disk", 00:07:23.207 "block_size": 512, 00:07:23.207 "num_blocks": 65536, 00:07:23.207 "uuid": "4390a71f-490c-4e51-a99f-fe7ac03323d3", 00:07:23.207 "assigned_rate_limits": { 00:07:23.207 "rw_ios_per_sec": 0, 00:07:23.207 "rw_mbytes_per_sec": 0, 00:07:23.207 "r_mbytes_per_sec": 0, 00:07:23.207 "w_mbytes_per_sec": 0 00:07:23.207 }, 00:07:23.207 "claimed": true, 00:07:23.207 "claim_type": "exclusive_write", 00:07:23.207 "zoned": false, 00:07:23.207 "supported_io_types": { 00:07:23.207 "read": true, 00:07:23.207 "write": true, 00:07:23.207 "unmap": true, 00:07:23.207 "flush": true, 00:07:23.207 "reset": true, 00:07:23.207 "nvme_admin": false, 00:07:23.207 "nvme_io": false, 00:07:23.207 "nvme_io_md": false, 00:07:23.207 "write_zeroes": true, 00:07:23.207 "zcopy": true, 00:07:23.207 "get_zone_info": false, 00:07:23.207 "zone_management": false, 00:07:23.207 "zone_append": false, 00:07:23.207 "compare": false, 00:07:23.207 "compare_and_write": false, 00:07:23.207 "abort": true, 00:07:23.207 "seek_hole": false, 00:07:23.207 "seek_data": false, 00:07:23.207 "copy": true, 00:07:23.207 "nvme_iov_md": false 00:07:23.207 }, 00:07:23.207 "memory_domains": [ 00:07:23.207 { 00:07:23.207 "dma_device_id": "system", 00:07:23.207 "dma_device_type": 1 00:07:23.207 }, 00:07:23.207 { 00:07:23.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.207 "dma_device_type": 2 00:07:23.207 } 00:07:23.207 ], 00:07:23.207 "driver_specific": {} 00:07:23.207 } 00:07:23.207 ] 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:23.207 17:48:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.207 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.466 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.466 17:48:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.466 "name": "Existed_Raid", 00:07:23.466 "uuid": "6f79e205-eece-438d-87b0-57e6458863a5", 00:07:23.466 "strip_size_kb": 64, 00:07:23.466 "state": "online", 00:07:23.466 "raid_level": "concat", 00:07:23.466 "superblock": true, 00:07:23.466 "num_base_bdevs": 2, 00:07:23.466 "num_base_bdevs_discovered": 2, 00:07:23.466 "num_base_bdevs_operational": 2, 00:07:23.466 "base_bdevs_list": [ 00:07:23.466 { 00:07:23.466 "name": "BaseBdev1", 00:07:23.466 "uuid": "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb", 00:07:23.466 "is_configured": true, 00:07:23.466 "data_offset": 2048, 00:07:23.466 "data_size": 63488 00:07:23.466 }, 00:07:23.466 { 00:07:23.466 "name": "BaseBdev2", 00:07:23.466 "uuid": "4390a71f-490c-4e51-a99f-fe7ac03323d3", 00:07:23.466 "is_configured": true, 00:07:23.466 "data_offset": 2048, 00:07:23.466 "data_size": 63488 00:07:23.466 } 00:07:23.466 ] 00:07:23.466 }' 00:07:23.466 17:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.466 17:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.725 [2024-10-25 17:48:42.080467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.725 "name": "Existed_Raid", 00:07:23.725 "aliases": [ 00:07:23.725 "6f79e205-eece-438d-87b0-57e6458863a5" 00:07:23.725 ], 00:07:23.725 "product_name": "Raid Volume", 00:07:23.725 "block_size": 512, 00:07:23.725 "num_blocks": 126976, 00:07:23.725 "uuid": "6f79e205-eece-438d-87b0-57e6458863a5", 00:07:23.725 "assigned_rate_limits": { 00:07:23.725 "rw_ios_per_sec": 0, 00:07:23.725 "rw_mbytes_per_sec": 0, 00:07:23.725 "r_mbytes_per_sec": 0, 00:07:23.725 "w_mbytes_per_sec": 0 00:07:23.725 }, 00:07:23.725 "claimed": false, 00:07:23.725 "zoned": false, 00:07:23.725 "supported_io_types": { 00:07:23.725 "read": true, 00:07:23.725 "write": true, 00:07:23.725 "unmap": true, 00:07:23.725 "flush": true, 00:07:23.725 "reset": true, 00:07:23.725 "nvme_admin": false, 00:07:23.725 "nvme_io": false, 00:07:23.725 "nvme_io_md": false, 00:07:23.725 "write_zeroes": true, 00:07:23.725 "zcopy": false, 00:07:23.725 "get_zone_info": false, 00:07:23.725 "zone_management": false, 00:07:23.725 "zone_append": false, 00:07:23.725 "compare": false, 00:07:23.725 "compare_and_write": false, 00:07:23.725 "abort": false, 00:07:23.725 "seek_hole": false, 00:07:23.725 "seek_data": false, 00:07:23.725 "copy": false, 00:07:23.725 "nvme_iov_md": false 00:07:23.725 }, 00:07:23.725 "memory_domains": [ 00:07:23.725 { 00:07:23.725 
"dma_device_id": "system", 00:07:23.725 "dma_device_type": 1 00:07:23.725 }, 00:07:23.725 { 00:07:23.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.725 "dma_device_type": 2 00:07:23.725 }, 00:07:23.725 { 00:07:23.725 "dma_device_id": "system", 00:07:23.725 "dma_device_type": 1 00:07:23.725 }, 00:07:23.725 { 00:07:23.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.725 "dma_device_type": 2 00:07:23.725 } 00:07:23.725 ], 00:07:23.725 "driver_specific": { 00:07:23.725 "raid": { 00:07:23.725 "uuid": "6f79e205-eece-438d-87b0-57e6458863a5", 00:07:23.725 "strip_size_kb": 64, 00:07:23.725 "state": "online", 00:07:23.725 "raid_level": "concat", 00:07:23.725 "superblock": true, 00:07:23.725 "num_base_bdevs": 2, 00:07:23.725 "num_base_bdevs_discovered": 2, 00:07:23.725 "num_base_bdevs_operational": 2, 00:07:23.725 "base_bdevs_list": [ 00:07:23.725 { 00:07:23.725 "name": "BaseBdev1", 00:07:23.725 "uuid": "5a7b4d4a-26ee-454c-8e9b-d4250e92acfb", 00:07:23.725 "is_configured": true, 00:07:23.725 "data_offset": 2048, 00:07:23.725 "data_size": 63488 00:07:23.725 }, 00:07:23.725 { 00:07:23.725 "name": "BaseBdev2", 00:07:23.725 "uuid": "4390a71f-490c-4e51-a99f-fe7ac03323d3", 00:07:23.725 "is_configured": true, 00:07:23.725 "data_offset": 2048, 00:07:23.725 "data_size": 63488 00:07:23.725 } 00:07:23.725 ] 00:07:23.725 } 00:07:23.725 } 00:07:23.725 }' 00:07:23.725 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.985 BaseBdev2' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.985 17:48:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.985 [2024-10-25 17:48:42.295975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.985 [2024-10-25 17:48:42.296041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.985 [2024-10-25 17:48:42.296118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.985 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.244 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.244 "name": "Existed_Raid", 00:07:24.244 "uuid": "6f79e205-eece-438d-87b0-57e6458863a5", 00:07:24.244 "strip_size_kb": 64, 00:07:24.244 "state": "offline", 00:07:24.244 "raid_level": "concat", 00:07:24.244 "superblock": true, 00:07:24.244 "num_base_bdevs": 2, 00:07:24.244 "num_base_bdevs_discovered": 1, 00:07:24.244 "num_base_bdevs_operational": 1, 00:07:24.244 "base_bdevs_list": [ 00:07:24.244 { 00:07:24.244 "name": null, 00:07:24.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.244 "is_configured": false, 00:07:24.244 "data_offset": 0, 00:07:24.244 "data_size": 63488 00:07:24.244 }, 00:07:24.244 { 00:07:24.244 "name": "BaseBdev2", 00:07:24.244 "uuid": "4390a71f-490c-4e51-a99f-fe7ac03323d3", 00:07:24.244 "is_configured": true, 00:07:24.244 "data_offset": 2048, 00:07:24.244 "data_size": 63488 00:07:24.244 } 00:07:24.244 ] 
00:07:24.244 }' 00:07:24.244 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.244 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.504 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.504 [2024-10-25 17:48:42.851705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:24.504 [2024-10-25 17:48:42.851796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:24.764 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.765 17:48:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61773 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61773 ']' 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61773 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.765 17:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61773 00:07:24.765 17:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.765 17:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:07:24.765 17:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61773' 00:07:24.765 killing process with pid 61773 00:07:24.765 17:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61773 00:07:24.765 [2024-10-25 17:48:43.010355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.765 17:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61773 00:07:24.765 [2024-10-25 17:48:43.026188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.704 17:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.704 ************************************ 00:07:25.704 END TEST raid_state_function_test_sb 00:07:25.704 ************************************ 00:07:25.704 00:07:25.704 real 0m4.856s 00:07:25.704 user 0m7.001s 00:07:25.704 sys 0m0.811s 00:07:25.704 17:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.704 17:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.704 17:48:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:25.704 17:48:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:25.704 17:48:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.704 17:48:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.964 ************************************ 00:07:25.964 START TEST raid_superblock_test 00:07:25.964 ************************************ 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62014 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62014 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62014 ']' 00:07:25.964 17:48:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.964 17:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.964 [2024-10-25 17:48:44.237607] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:25.964 [2024-10-25 17:48:44.237760] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:07:26.225 [2024-10-25 17:48:44.409126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.225 [2024-10-25 17:48:44.518932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.485 [2024-10-25 17:48:44.696095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.485 [2024-10-25 17:48:44.696226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.745 
17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.745 malloc1 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.745 [2024-10-25 17:48:45.110602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:26.745 [2024-10-25 17:48:45.110703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.745 [2024-10-25 17:48:45.110729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:26.745 [2024-10-25 17:48:45.110738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:26.745 [2024-10-25 17:48:45.112796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.745 [2024-10-25 17:48:45.112843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:26.745 pt1 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.745 malloc2 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.745 [2024-10-25 17:48:45.163354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.745 [2024-10-25 17:48:45.163443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.745 [2024-10-25 17:48:45.163479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:26.745 [2024-10-25 17:48:45.163506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.745 [2024-10-25 17:48:45.165545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.745 [2024-10-25 17:48:45.165613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.745 pt2 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:26.745 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.746 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.746 [2024-10-25 17:48:45.175391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:26.746 [2024-10-25 17:48:45.177144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.746 [2024-10-25 17:48:45.177347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:26.746 [2024-10-25 17:48:45.177391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:26.746 [2024-10-25 17:48:45.177637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.746 [2024-10-25 17:48:45.177810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:26.746 [2024-10-25 17:48:45.177884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:26.746 [2024-10-25 17:48:45.178062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.746 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.006 17:48:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.006 "name": "raid_bdev1", 00:07:27.006 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:27.006 "strip_size_kb": 64, 00:07:27.006 "state": "online", 00:07:27.006 "raid_level": "concat", 00:07:27.006 "superblock": true, 00:07:27.006 "num_base_bdevs": 2, 00:07:27.006 "num_base_bdevs_discovered": 2, 00:07:27.006 "num_base_bdevs_operational": 2, 00:07:27.006 "base_bdevs_list": [ 00:07:27.006 { 00:07:27.006 "name": "pt1", 00:07:27.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.006 "is_configured": true, 00:07:27.006 "data_offset": 2048, 00:07:27.006 "data_size": 63488 00:07:27.006 }, 00:07:27.006 { 00:07:27.006 "name": "pt2", 00:07:27.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.006 "is_configured": true, 00:07:27.006 "data_offset": 2048, 00:07:27.006 "data_size": 63488 00:07:27.006 } 00:07:27.006 ] 00:07:27.006 }' 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.006 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.266 
17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.266 [2024-10-25 17:48:45.654782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.266 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.266 "name": "raid_bdev1", 00:07:27.266 "aliases": [ 00:07:27.266 "5756db00-cfe7-4267-95ca-beba9639dfab" 00:07:27.266 ], 00:07:27.266 "product_name": "Raid Volume", 00:07:27.266 "block_size": 512, 00:07:27.266 "num_blocks": 126976, 00:07:27.266 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:27.266 "assigned_rate_limits": { 00:07:27.266 "rw_ios_per_sec": 0, 00:07:27.266 "rw_mbytes_per_sec": 0, 00:07:27.266 "r_mbytes_per_sec": 0, 00:07:27.266 "w_mbytes_per_sec": 0 00:07:27.266 }, 00:07:27.266 "claimed": false, 00:07:27.266 "zoned": false, 00:07:27.266 "supported_io_types": { 00:07:27.266 "read": true, 00:07:27.266 "write": true, 00:07:27.266 "unmap": true, 00:07:27.266 "flush": true, 00:07:27.266 "reset": true, 00:07:27.266 "nvme_admin": false, 00:07:27.266 "nvme_io": false, 00:07:27.266 "nvme_io_md": false, 00:07:27.266 "write_zeroes": true, 00:07:27.266 "zcopy": false, 00:07:27.266 "get_zone_info": false, 00:07:27.266 "zone_management": false, 00:07:27.266 "zone_append": false, 00:07:27.266 "compare": false, 00:07:27.266 "compare_and_write": false, 00:07:27.266 "abort": false, 00:07:27.266 "seek_hole": false, 00:07:27.266 
"seek_data": false, 00:07:27.266 "copy": false, 00:07:27.266 "nvme_iov_md": false 00:07:27.266 }, 00:07:27.266 "memory_domains": [ 00:07:27.266 { 00:07:27.266 "dma_device_id": "system", 00:07:27.266 "dma_device_type": 1 00:07:27.266 }, 00:07:27.266 { 00:07:27.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.266 "dma_device_type": 2 00:07:27.266 }, 00:07:27.266 { 00:07:27.266 "dma_device_id": "system", 00:07:27.266 "dma_device_type": 1 00:07:27.266 }, 00:07:27.266 { 00:07:27.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.266 "dma_device_type": 2 00:07:27.267 } 00:07:27.267 ], 00:07:27.267 "driver_specific": { 00:07:27.267 "raid": { 00:07:27.267 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:27.267 "strip_size_kb": 64, 00:07:27.267 "state": "online", 00:07:27.267 "raid_level": "concat", 00:07:27.267 "superblock": true, 00:07:27.267 "num_base_bdevs": 2, 00:07:27.267 "num_base_bdevs_discovered": 2, 00:07:27.267 "num_base_bdevs_operational": 2, 00:07:27.267 "base_bdevs_list": [ 00:07:27.267 { 00:07:27.267 "name": "pt1", 00:07:27.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.267 "is_configured": true, 00:07:27.267 "data_offset": 2048, 00:07:27.267 "data_size": 63488 00:07:27.267 }, 00:07:27.267 { 00:07:27.267 "name": "pt2", 00:07:27.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.267 "is_configured": true, 00:07:27.267 "data_offset": 2048, 00:07:27.267 "data_size": 63488 00:07:27.267 } 00:07:27.267 ] 00:07:27.267 } 00:07:27.267 } 00:07:27.267 }' 00:07:27.267 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.526 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.526 pt2' 00:07:27.526 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.526 17:48:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.526 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 [2024-10-25 17:48:45.838423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5756db00-cfe7-4267-95ca-beba9639dfab 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5756db00-cfe7-4267-95ca-beba9639dfab ']' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 [2024-10-25 17:48:45.882090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.527 [2024-10-25 17:48:45.882112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.527 [2024-10-25 17:48:45.882181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.527 [2024-10-25 17:48:45.882224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.527 [2024-10-25 17:48:45.882234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.527 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:27.787 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:27.787 17:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:27.787 17:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 [2024-10-25 17:48:46.021963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:27.787 [2024-10-25 17:48:46.023728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:27.787 [2024-10-25 17:48:46.023794] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:27.787 [2024-10-25 17:48:46.023855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:27.787 [2024-10-25 17:48:46.023887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.787 [2024-10-25 17:48:46.023897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:27.787 request: 00:07:27.787 { 00:07:27.787 "name": "raid_bdev1", 00:07:27.787 "raid_level": "concat", 00:07:27.787 "base_bdevs": [ 00:07:27.787 "malloc1", 00:07:27.787 "malloc2" 00:07:27.787 ], 00:07:27.787 "strip_size_kb": 64, 00:07:27.787 "superblock": false, 00:07:27.787 "method": "bdev_raid_create", 00:07:27.787 "req_id": 1 00:07:27.787 } 00:07:27.787 Got JSON-RPC error response 00:07:27.787 response: 00:07:27.787 { 00:07:27.787 "code": -17, 00:07:27.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:27.787 } 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 
17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 [2024-10-25 17:48:46.085788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.787 [2024-10-25 17:48:46.085888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.787 [2024-10-25 17:48:46.085922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:27.787 [2024-10-25 17:48:46.085951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.787 [2024-10-25 17:48:46.088028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.787 [2024-10-25 17:48:46.088107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.787 [2024-10-25 17:48:46.088195] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.787 [2024-10-25 17:48:46.088267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.787 pt1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.787 "name": "raid_bdev1", 00:07:27.787 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:27.787 "strip_size_kb": 64, 00:07:27.787 "state": "configuring", 00:07:27.787 "raid_level": "concat", 00:07:27.787 "superblock": true, 00:07:27.787 "num_base_bdevs": 2, 00:07:27.787 "num_base_bdevs_discovered": 1, 00:07:27.787 "num_base_bdevs_operational": 2, 00:07:27.787 "base_bdevs_list": [ 00:07:27.787 { 00:07:27.787 "name": "pt1", 00:07:27.787 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:27.787 "is_configured": true, 00:07:27.787 "data_offset": 2048, 00:07:27.787 "data_size": 63488 00:07:27.787 }, 00:07:27.787 { 00:07:27.787 "name": null, 00:07:27.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.787 "is_configured": false, 00:07:27.787 "data_offset": 2048, 00:07:27.787 "data_size": 63488 00:07:27.787 } 00:07:27.787 ] 00:07:27.787 }' 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.787 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.357 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:28.357 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.358 [2024-10-25 17:48:46.513099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:28.358 [2024-10-25 17:48:46.513176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.358 [2024-10-25 17:48:46.513197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:28.358 [2024-10-25 17:48:46.513208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.358 [2024-10-25 17:48:46.513663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.358 [2024-10-25 17:48:46.513684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:28.358 [2024-10-25 17:48:46.513762] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:28.358 [2024-10-25 17:48:46.513787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.358 [2024-10-25 17:48:46.513921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.358 [2024-10-25 17:48:46.513934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.358 [2024-10-25 17:48:46.514161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:28.358 [2024-10-25 17:48:46.514323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.358 [2024-10-25 17:48:46.514333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.358 [2024-10-25 17:48:46.514463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.358 pt2 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.358 "name": "raid_bdev1", 00:07:28.358 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:28.358 "strip_size_kb": 64, 00:07:28.358 "state": "online", 00:07:28.358 "raid_level": "concat", 00:07:28.358 "superblock": true, 00:07:28.358 "num_base_bdevs": 2, 00:07:28.358 "num_base_bdevs_discovered": 2, 00:07:28.358 "num_base_bdevs_operational": 2, 00:07:28.358 "base_bdevs_list": [ 00:07:28.358 { 00:07:28.358 "name": "pt1", 00:07:28.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.358 "is_configured": true, 00:07:28.358 "data_offset": 2048, 00:07:28.358 "data_size": 63488 00:07:28.358 }, 00:07:28.358 { 00:07:28.358 "name": "pt2", 00:07:28.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.358 "is_configured": true, 00:07:28.358 "data_offset": 2048, 00:07:28.358 "data_size": 63488 00:07:28.358 } 00:07:28.358 ] 00:07:28.358 }' 00:07:28.358 17:48:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.358 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.618 [2024-10-25 17:48:46.932556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.618 "name": "raid_bdev1", 00:07:28.618 "aliases": [ 00:07:28.618 "5756db00-cfe7-4267-95ca-beba9639dfab" 00:07:28.618 ], 00:07:28.618 "product_name": "Raid Volume", 00:07:28.618 "block_size": 512, 00:07:28.618 "num_blocks": 126976, 00:07:28.618 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:28.618 "assigned_rate_limits": { 00:07:28.618 "rw_ios_per_sec": 0, 00:07:28.618 "rw_mbytes_per_sec": 0, 00:07:28.618 
"r_mbytes_per_sec": 0, 00:07:28.618 "w_mbytes_per_sec": 0 00:07:28.618 }, 00:07:28.618 "claimed": false, 00:07:28.618 "zoned": false, 00:07:28.618 "supported_io_types": { 00:07:28.618 "read": true, 00:07:28.618 "write": true, 00:07:28.618 "unmap": true, 00:07:28.618 "flush": true, 00:07:28.618 "reset": true, 00:07:28.618 "nvme_admin": false, 00:07:28.618 "nvme_io": false, 00:07:28.618 "nvme_io_md": false, 00:07:28.618 "write_zeroes": true, 00:07:28.618 "zcopy": false, 00:07:28.618 "get_zone_info": false, 00:07:28.618 "zone_management": false, 00:07:28.618 "zone_append": false, 00:07:28.618 "compare": false, 00:07:28.618 "compare_and_write": false, 00:07:28.618 "abort": false, 00:07:28.618 "seek_hole": false, 00:07:28.618 "seek_data": false, 00:07:28.618 "copy": false, 00:07:28.618 "nvme_iov_md": false 00:07:28.618 }, 00:07:28.618 "memory_domains": [ 00:07:28.618 { 00:07:28.618 "dma_device_id": "system", 00:07:28.618 "dma_device_type": 1 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.618 "dma_device_type": 2 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "dma_device_id": "system", 00:07:28.618 "dma_device_type": 1 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.618 "dma_device_type": 2 00:07:28.618 } 00:07:28.618 ], 00:07:28.618 "driver_specific": { 00:07:28.618 "raid": { 00:07:28.618 "uuid": "5756db00-cfe7-4267-95ca-beba9639dfab", 00:07:28.618 "strip_size_kb": 64, 00:07:28.618 "state": "online", 00:07:28.618 "raid_level": "concat", 00:07:28.618 "superblock": true, 00:07:28.618 "num_base_bdevs": 2, 00:07:28.618 "num_base_bdevs_discovered": 2, 00:07:28.618 "num_base_bdevs_operational": 2, 00:07:28.618 "base_bdevs_list": [ 00:07:28.618 { 00:07:28.618 "name": "pt1", 00:07:28.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.618 "is_configured": true, 00:07:28.618 "data_offset": 2048, 00:07:28.618 "data_size": 63488 00:07:28.618 }, 00:07:28.618 { 00:07:28.618 "name": 
"pt2", 00:07:28.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.618 "is_configured": true, 00:07:28.618 "data_offset": 2048, 00:07:28.618 "data_size": 63488 00:07:28.618 } 00:07:28.618 ] 00:07:28.618 } 00:07:28.618 } 00:07:28.618 }' 00:07:28.618 17:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.618 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.618 pt2' 00:07:28.618 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.618 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.618 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.878 [2024-10-25 17:48:47.160148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5756db00-cfe7-4267-95ca-beba9639dfab '!=' 5756db00-cfe7-4267-95ca-beba9639dfab ']' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62014 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62014 ']' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62014 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62014 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62014' 00:07:28.878 killing process with pid 62014 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62014 00:07:28.878 [2024-10-25 17:48:47.229371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.878 [2024-10-25 17:48:47.229498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.878 [2024-10-25 17:48:47.229569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.878 17:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62014 00:07:28.878 [2024-10-25 17:48:47.229620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:29.138 [2024-10-25 17:48:47.431264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.079 17:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:30.079 00:07:30.079 real 0m4.359s 00:07:30.079 user 0m6.101s 00:07:30.079 sys 0m0.751s 00:07:30.079 17:48:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.079 17:48:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:30.079 ************************************ 00:07:30.079 END TEST raid_superblock_test 00:07:30.079 ************************************ 00:07:30.339 17:48:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:30.339 17:48:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.339 17:48:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.339 17:48:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.339 ************************************ 00:07:30.339 START TEST raid_read_error_test 00:07:30.339 ************************************ 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l27cpySarn 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62231 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62231 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62231 ']' 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.339 17:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.339 [2024-10-25 17:48:48.679670] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:30.339 [2024-10-25 17:48:48.679853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62231 ] 00:07:30.599 [2024-10-25 17:48:48.852510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.599 [2024-10-25 17:48:48.964375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.860 [2024-10-25 17:48:49.154211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.860 [2024-10-25 17:48:49.154332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.119 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 BaseBdev1_malloc 
00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 true 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 [2024-10-25 17:48:49.587467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:31.379 [2024-10-25 17:48:49.587570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.379 [2024-10-25 17:48:49.587609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:31.379 [2024-10-25 17:48:49.587640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.379 [2024-10-25 17:48:49.589711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.379 [2024-10-25 17:48:49.589792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:31.379 BaseBdev1 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 BaseBdev2_malloc 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 true 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 [2024-10-25 17:48:49.645326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:31.379 [2024-10-25 17:48:49.645424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.379 [2024-10-25 17:48:49.645458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.379 [2024-10-25 17:48:49.645489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.379 [2024-10-25 17:48:49.647507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.379 [2024-10-25 17:48:49.647579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:31.379 BaseBdev2 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:31.379 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.380 [2024-10-25 17:48:49.657386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.380 [2024-10-25 17:48:49.659184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.380 [2024-10-25 17:48:49.659407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.380 [2024-10-25 17:48:49.659455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.380 [2024-10-25 17:48:49.659691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.380 [2024-10-25 17:48:49.659912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.380 [2024-10-25 17:48:49.659957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.380 [2024-10-25 17:48:49.660153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.380 "name": "raid_bdev1", 00:07:31.380 "uuid": "dbfcf8bc-f489-47fb-afc4-8dc43b3de3f4", 00:07:31.380 "strip_size_kb": 64, 00:07:31.380 "state": "online", 00:07:31.380 "raid_level": "concat", 00:07:31.380 "superblock": true, 00:07:31.380 "num_base_bdevs": 2, 00:07:31.380 "num_base_bdevs_discovered": 2, 00:07:31.380 "num_base_bdevs_operational": 2, 00:07:31.380 "base_bdevs_list": [ 00:07:31.380 { 00:07:31.380 "name": "BaseBdev1", 00:07:31.380 "uuid": "2a40aa14-1a60-5ea3-8d71-01dbfbf7a2d0", 00:07:31.380 "is_configured": true, 00:07:31.380 "data_offset": 2048, 00:07:31.380 "data_size": 63488 00:07:31.380 }, 00:07:31.380 { 00:07:31.380 "name": "BaseBdev2", 00:07:31.380 
"uuid": "fad882bf-0121-5561-8fca-ecd5140823d8", 00:07:31.380 "is_configured": true, 00:07:31.380 "data_offset": 2048, 00:07:31.380 "data_size": 63488 00:07:31.380 } 00:07:31.380 ] 00:07:31.380 }' 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.380 17:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 17:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:31.639 17:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:31.898 [2024-10-25 17:48:50.161740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.839 "name": "raid_bdev1", 00:07:32.839 "uuid": "dbfcf8bc-f489-47fb-afc4-8dc43b3de3f4", 00:07:32.839 "strip_size_kb": 64, 00:07:32.839 "state": "online", 00:07:32.839 "raid_level": "concat", 00:07:32.839 "superblock": true, 00:07:32.839 "num_base_bdevs": 2, 00:07:32.839 "num_base_bdevs_discovered": 2, 00:07:32.839 "num_base_bdevs_operational": 2, 00:07:32.839 "base_bdevs_list": [ 00:07:32.839 { 00:07:32.839 "name": "BaseBdev1", 00:07:32.839 "uuid": "2a40aa14-1a60-5ea3-8d71-01dbfbf7a2d0", 00:07:32.839 "is_configured": true, 00:07:32.839 "data_offset": 2048, 00:07:32.839 "data_size": 63488 00:07:32.839 }, 00:07:32.839 { 00:07:32.839 "name": "BaseBdev2", 00:07:32.839 "uuid": 
"fad882bf-0121-5561-8fca-ecd5140823d8", 00:07:32.839 "is_configured": true, 00:07:32.839 "data_offset": 2048, 00:07:32.839 "data_size": 63488 00:07:32.839 } 00:07:32.839 ] 00:07:32.839 }' 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.839 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.099 [2024-10-25 17:48:51.519863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.099 [2024-10-25 17:48:51.519962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.099 [2024-10-25 17:48:51.522700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.099 [2024-10-25 17:48:51.522749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.099 [2024-10-25 17:48:51.522780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.099 [2024-10-25 17:48:51.522794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.099 { 00:07:33.099 "results": [ 00:07:33.099 { 00:07:33.099 "job": "raid_bdev1", 00:07:33.099 "core_mask": "0x1", 00:07:33.099 "workload": "randrw", 00:07:33.099 "percentage": 50, 00:07:33.099 "status": "finished", 00:07:33.099 "queue_depth": 1, 00:07:33.099 "io_size": 131072, 00:07:33.099 "runtime": 1.359156, 00:07:33.099 "iops": 16904.60844818402, 00:07:33.099 "mibps": 2113.0760560230024, 00:07:33.099 "io_failed": 1, 00:07:33.099 "io_timeout": 0, 00:07:33.099 "avg_latency_us": 
82.17763751980573, 00:07:33.099 "min_latency_us": 25.041048034934498, 00:07:33.099 "max_latency_us": 1402.2986899563318 00:07:33.099 } 00:07:33.099 ], 00:07:33.099 "core_count": 1 00:07:33.099 } 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62231 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62231 ']' 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62231 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:33.099 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62231 00:07:33.360 killing process with pid 62231 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62231' 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62231 00:07:33.360 [2024-10-25 17:48:51.568557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.360 17:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62231 00:07:33.360 [2024-10-25 17:48:51.701552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l27cpySarn 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:34.769 
17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:34.769 ************************************ 00:07:34.769 END TEST raid_read_error_test 00:07:34.769 ************************************ 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:34.769 00:07:34.769 real 0m4.205s 00:07:34.769 user 0m5.039s 00:07:34.769 sys 0m0.541s 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.769 17:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.769 17:48:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:34.769 17:48:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.769 17:48:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.769 17:48:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.769 ************************************ 00:07:34.769 START TEST raid_write_error_test 00:07:34.769 ************************************ 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.769 17:48:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FryUrB45TL 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62371 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62371 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62371 ']' 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.769 17:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.769 [2024-10-25 17:48:52.946642] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:07:34.769 [2024-10-25 17:48:52.946808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62371 ] 00:07:34.769 [2024-10-25 17:48:53.118870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.029 [2024-10-25 17:48:53.222287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.029 [2024-10-25 17:48:53.411446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.029 [2024-10-25 17:48:53.411577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 BaseBdev1_malloc 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 true 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 [2024-10-25 17:48:53.840263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.599 [2024-10-25 17:48:53.840357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.599 [2024-10-25 17:48:53.840393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.599 [2024-10-25 17:48:53.840423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.599 [2024-10-25 17:48:53.842430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.599 [2024-10-25 17:48:53.842505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.599 BaseBdev1 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 BaseBdev2_malloc 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.599 17:48:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 true 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 [2024-10-25 17:48:53.905493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.599 [2024-10-25 17:48:53.905545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.599 [2024-10-25 17:48:53.905560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.599 [2024-10-25 17:48:53.905570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.599 [2024-10-25 17:48:53.907562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.599 [2024-10-25 17:48:53.907603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.599 BaseBdev2 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.599 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 [2024-10-25 17:48:53.913560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:35.599 [2024-10-25 17:48:53.915312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.599 [2024-10-25 17:48:53.915498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.599 [2024-10-25 17:48:53.915513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.600 [2024-10-25 17:48:53.915720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:35.600 [2024-10-25 17:48:53.915905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.600 [2024-10-25 17:48:53.915918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.600 [2024-10-25 17:48:53.916100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.600 17:48:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.600 "name": "raid_bdev1", 00:07:35.600 "uuid": "a9efb51e-2d84-4f71-a722-fe5d33167ca2", 00:07:35.600 "strip_size_kb": 64, 00:07:35.600 "state": "online", 00:07:35.600 "raid_level": "concat", 00:07:35.600 "superblock": true, 00:07:35.600 "num_base_bdevs": 2, 00:07:35.600 "num_base_bdevs_discovered": 2, 00:07:35.600 "num_base_bdevs_operational": 2, 00:07:35.600 "base_bdevs_list": [ 00:07:35.600 { 00:07:35.600 "name": "BaseBdev1", 00:07:35.600 "uuid": "1c211bae-5389-5aae-9112-e5b52dc60a57", 00:07:35.600 "is_configured": true, 00:07:35.600 "data_offset": 2048, 00:07:35.600 "data_size": 63488 00:07:35.600 }, 00:07:35.600 { 00:07:35.600 "name": "BaseBdev2", 00:07:35.600 "uuid": "23262b28-10b4-553f-a72c-da1983b17bc7", 00:07:35.600 "is_configured": true, 00:07:35.600 "data_offset": 2048, 00:07:35.600 "data_size": 63488 00:07:35.600 } 00:07:35.600 ] 00:07:35.600 }' 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.600 17:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.170 17:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:36.170 17:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.170 [2024-10-25 17:48:54.433905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.109 "name": "raid_bdev1", 00:07:37.109 "uuid": "a9efb51e-2d84-4f71-a722-fe5d33167ca2", 00:07:37.109 "strip_size_kb": 64, 00:07:37.109 "state": "online", 00:07:37.109 "raid_level": "concat", 00:07:37.109 "superblock": true, 00:07:37.109 "num_base_bdevs": 2, 00:07:37.109 "num_base_bdevs_discovered": 2, 00:07:37.109 "num_base_bdevs_operational": 2, 00:07:37.109 "base_bdevs_list": [ 00:07:37.109 { 00:07:37.109 "name": "BaseBdev1", 00:07:37.109 "uuid": "1c211bae-5389-5aae-9112-e5b52dc60a57", 00:07:37.109 "is_configured": true, 00:07:37.109 "data_offset": 2048, 00:07:37.109 "data_size": 63488 00:07:37.109 }, 00:07:37.109 { 00:07:37.109 "name": "BaseBdev2", 00:07:37.109 "uuid": "23262b28-10b4-553f-a72c-da1983b17bc7", 00:07:37.109 "is_configured": true, 00:07:37.109 "data_offset": 2048, 00:07:37.109 "data_size": 63488 00:07:37.109 } 00:07:37.109 ] 00:07:37.109 }' 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.109 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.370 [2024-10-25 17:48:55.787532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.370 [2024-10-25 17:48:55.787569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.370 [2024-10-25 17:48:55.790117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.370 [2024-10-25 17:48:55.790161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.370 [2024-10-25 17:48:55.790192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.370 [2024-10-25 17:48:55.790205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.370 { 00:07:37.370 "results": [ 00:07:37.370 { 00:07:37.370 "job": "raid_bdev1", 00:07:37.370 "core_mask": "0x1", 00:07:37.370 "workload": "randrw", 00:07:37.370 "percentage": 50, 00:07:37.370 "status": "finished", 00:07:37.370 "queue_depth": 1, 00:07:37.370 "io_size": 131072, 00:07:37.370 "runtime": 1.354493, 00:07:37.370 "iops": 17553.431431539328, 00:07:37.370 "mibps": 2194.178928942416, 00:07:37.370 "io_failed": 1, 00:07:37.370 "io_timeout": 0, 00:07:37.370 "avg_latency_us": 78.91932995318766, 00:07:37.370 "min_latency_us": 24.370305676855896, 00:07:37.370 "max_latency_us": 1352.216593886463 00:07:37.370 } 00:07:37.370 ], 00:07:37.370 "core_count": 1 00:07:37.370 } 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62371 00:07:37.370 17:48:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62371 ']' 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62371 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.370 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62371 00:07:37.631 killing process with pid 62371 00:07:37.631 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.631 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.631 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62371' 00:07:37.631 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62371 00:07:37.631 [2024-10-25 17:48:55.832046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.631 17:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62371 00:07:37.631 [2024-10-25 17:48:55.955855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FryUrB45TL 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.014 17:48:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:39.014 00:07:39.014 real 0m4.209s 00:07:39.014 user 0m5.026s 00:07:39.014 sys 0m0.545s 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.014 17:48:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.014 ************************************ 00:07:39.014 END TEST raid_write_error_test 00:07:39.014 ************************************ 00:07:39.014 17:48:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:39.014 17:48:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:39.014 17:48:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:39.014 17:48:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.014 17:48:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.014 ************************************ 00:07:39.014 START TEST raid_state_function_test 00:07:39.014 ************************************ 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62509 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62509' 00:07:39.014 Process raid pid: 62509 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62509 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62509 ']' 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.014 17:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.014 [2024-10-25 17:48:57.224132] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
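The `waitforlisten 62509` call above blocks until the `bdev_svc` app is up and listening on `/var/tmp/spdk.sock` (note the `local max_retries=100` in the trace). A minimal sketch of that polling shape — an assumption about the helper's internals, not the actual `autotest_common.sh` code:

```shell
# Hypothetical waitforlisten-style loop: poll until the PID is alive and
# its RPC UNIX socket exists, giving up after max_retries attempts.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    # kill -0 checks process liveness; -S checks for the UNIX socket.
    if kill -0 "$pid" 2>/dev/null && [ -S "$rpc_addr" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# e.g. give up quickly when the target process is gone:
waitforlisten_sketch 99999999 /tmp/no-such.sock 2 || echo "timed out waiting for RPC socket"
```

The real helper also issues an RPC over the socket to confirm readiness; existence of the socket file is the simplification here.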
00:07:39.014 [2024-10-25 17:48:57.224317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.014 [2024-10-25 17:48:57.396254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.275 [2024-10-25 17:48:57.504801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.275 [2024-10-25 17:48:57.703075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.275 [2024-10-25 17:48:57.703105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.846 [2024-10-25 17:48:58.041356] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.846 [2024-10-25 17:48:58.041409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.846 [2024-10-25 17:48:58.041419] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.846 [2024-10-25 17:48:58.041428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.846 17:48:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.846 "name": "Existed_Raid", 00:07:39.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.846 "strip_size_kb": 0, 00:07:39.846 "state": "configuring", 00:07:39.846 
"raid_level": "raid1", 00:07:39.846 "superblock": false, 00:07:39.846 "num_base_bdevs": 2, 00:07:39.846 "num_base_bdevs_discovered": 0, 00:07:39.846 "num_base_bdevs_operational": 2, 00:07:39.846 "base_bdevs_list": [ 00:07:39.846 { 00:07:39.846 "name": "BaseBdev1", 00:07:39.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.846 "is_configured": false, 00:07:39.846 "data_offset": 0, 00:07:39.846 "data_size": 0 00:07:39.846 }, 00:07:39.846 { 00:07:39.846 "name": "BaseBdev2", 00:07:39.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.846 "is_configured": false, 00:07:39.846 "data_offset": 0, 00:07:39.846 "data_size": 0 00:07:39.846 } 00:07:39.846 ] 00:07:39.846 }' 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.846 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.107 [2024-10-25 17:48:58.484590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.107 [2024-10-25 17:48:58.484670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
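The `raid_bdev_info` dump above is what `verify_raid_bdev_state` inspects (the test itself uses `jq -r '.[] | select(.name == "Existed_Raid")'`). A self-contained sketch of pulling one field out of such a blob and checking it — `json_field` is a hypothetical helper, not part of `bdev_raid.sh`, and the JSON literal is abridged from the dump above:

```shell
# Hypothetical helper: extract one top-level string field from a
# bdev-info JSON blob; assumes no embedded quotes in the value.
json_field() {
  local json=$1 key=$2
  printf '%s\n' "$json" | sed -n "s/.*\"$key\": \"\([^\"]*\)\".*/\1/p"
}

# Abridged from the Existed_Raid info printed above.
info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "raid1" }'

state=$(json_field "$info" state)
[ "$state" = "configuring" ] && echo "state OK: $state"
```

The real script does this with `jq`, which is the right tool for nested fields like `base_bdevs_list`; the sed version only illustrates the comparison the test performs.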
00:07:40.107 [2024-10-25 17:48:58.492568] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.107 [2024-10-25 17:48:58.492649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.107 [2024-10-25 17:48:58.492677] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.107 [2024-10-25 17:48:58.492702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.107 [2024-10-25 17:48:58.535507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.107 BaseBdev1 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.107 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.367 [ 00:07:40.367 { 00:07:40.367 "name": "BaseBdev1", 00:07:40.367 "aliases": [ 00:07:40.367 "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9" 00:07:40.367 ], 00:07:40.367 "product_name": "Malloc disk", 00:07:40.367 "block_size": 512, 00:07:40.367 "num_blocks": 65536, 00:07:40.367 "uuid": "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9", 00:07:40.367 "assigned_rate_limits": { 00:07:40.367 "rw_ios_per_sec": 0, 00:07:40.367 "rw_mbytes_per_sec": 0, 00:07:40.367 "r_mbytes_per_sec": 0, 00:07:40.367 "w_mbytes_per_sec": 0 00:07:40.367 }, 00:07:40.367 "claimed": true, 00:07:40.367 "claim_type": "exclusive_write", 00:07:40.367 "zoned": false, 00:07:40.367 "supported_io_types": { 00:07:40.367 "read": true, 00:07:40.367 "write": true, 00:07:40.367 "unmap": true, 00:07:40.367 "flush": true, 00:07:40.367 "reset": true, 00:07:40.367 "nvme_admin": false, 00:07:40.367 "nvme_io": false, 00:07:40.367 "nvme_io_md": false, 00:07:40.367 "write_zeroes": true, 00:07:40.367 "zcopy": true, 00:07:40.367 "get_zone_info": false, 00:07:40.367 "zone_management": false, 00:07:40.367 "zone_append": false, 00:07:40.367 "compare": false, 00:07:40.367 "compare_and_write": false, 00:07:40.367 "abort": true, 00:07:40.367 "seek_hole": false, 00:07:40.367 "seek_data": false, 00:07:40.367 "copy": true, 00:07:40.367 "nvme_iov_md": 
false 00:07:40.367 }, 00:07:40.367 "memory_domains": [ 00:07:40.367 { 00:07:40.367 "dma_device_id": "system", 00:07:40.367 "dma_device_type": 1 00:07:40.367 }, 00:07:40.367 { 00:07:40.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.367 "dma_device_type": 2 00:07:40.367 } 00:07:40.367 ], 00:07:40.367 "driver_specific": {} 00:07:40.367 } 00:07:40.367 ] 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.367 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.368 
17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.368 "name": "Existed_Raid", 00:07:40.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.368 "strip_size_kb": 0, 00:07:40.368 "state": "configuring", 00:07:40.368 "raid_level": "raid1", 00:07:40.368 "superblock": false, 00:07:40.368 "num_base_bdevs": 2, 00:07:40.368 "num_base_bdevs_discovered": 1, 00:07:40.368 "num_base_bdevs_operational": 2, 00:07:40.368 "base_bdevs_list": [ 00:07:40.368 { 00:07:40.368 "name": "BaseBdev1", 00:07:40.368 "uuid": "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9", 00:07:40.368 "is_configured": true, 00:07:40.368 "data_offset": 0, 00:07:40.368 "data_size": 65536 00:07:40.368 }, 00:07:40.368 { 00:07:40.368 "name": "BaseBdev2", 00:07:40.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.368 "is_configured": false, 00:07:40.368 "data_offset": 0, 00:07:40.368 "data_size": 0 00:07:40.368 } 00:07:40.368 ] 00:07:40.368 }' 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.368 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.628 [2024-10-25 17:48:58.986817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:40.628 [2024-10-25 17:48:58.986954] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.628 17:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.628 [2024-10-25 17:48:58.998863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.628 [2024-10-25 17:48:59.000641] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.628 [2024-10-25 17:48:59.000685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.628 "name": "Existed_Raid", 00:07:40.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.628 "strip_size_kb": 0, 00:07:40.628 "state": "configuring", 00:07:40.628 "raid_level": "raid1", 00:07:40.628 "superblock": false, 00:07:40.628 "num_base_bdevs": 2, 00:07:40.628 "num_base_bdevs_discovered": 1, 00:07:40.628 "num_base_bdevs_operational": 2, 00:07:40.628 "base_bdevs_list": [ 00:07:40.628 { 00:07:40.628 "name": "BaseBdev1", 00:07:40.628 "uuid": "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9", 00:07:40.628 "is_configured": true, 00:07:40.628 "data_offset": 0, 00:07:40.628 "data_size": 65536 00:07:40.628 }, 00:07:40.628 { 00:07:40.628 "name": "BaseBdev2", 00:07:40.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.628 "is_configured": false, 00:07:40.628 "data_offset": 0, 00:07:40.628 "data_size": 0 00:07:40.628 } 00:07:40.628 ] 
00:07:40.628 }' 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.628 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 [2024-10-25 17:48:59.468819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.202 [2024-10-25 17:48:59.468955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.202 [2024-10-25 17:48:59.468982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:41.202 [2024-10-25 17:48:59.469274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.202 [2024-10-25 17:48:59.469470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.202 [2024-10-25 17:48:59.469517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:41.202 [2024-10-25 17:48:59.469823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.202 BaseBdev2 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.202 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 [ 00:07:41.202 { 00:07:41.202 "name": "BaseBdev2", 00:07:41.202 "aliases": [ 00:07:41.202 "390483ed-f5c4-4ab4-9405-6da887cf61b9" 00:07:41.202 ], 00:07:41.202 "product_name": "Malloc disk", 00:07:41.202 "block_size": 512, 00:07:41.202 "num_blocks": 65536, 00:07:41.202 "uuid": "390483ed-f5c4-4ab4-9405-6da887cf61b9", 00:07:41.202 "assigned_rate_limits": { 00:07:41.202 "rw_ios_per_sec": 0, 00:07:41.202 "rw_mbytes_per_sec": 0, 00:07:41.202 "r_mbytes_per_sec": 0, 00:07:41.202 "w_mbytes_per_sec": 0 00:07:41.202 }, 00:07:41.202 "claimed": true, 00:07:41.202 "claim_type": "exclusive_write", 00:07:41.202 "zoned": false, 00:07:41.202 "supported_io_types": { 00:07:41.202 "read": true, 00:07:41.202 "write": true, 00:07:41.202 "unmap": true, 00:07:41.203 "flush": true, 00:07:41.203 "reset": true, 00:07:41.203 "nvme_admin": false, 00:07:41.203 "nvme_io": false, 00:07:41.203 "nvme_io_md": false, 00:07:41.203 "write_zeroes": 
true, 00:07:41.203 "zcopy": true, 00:07:41.203 "get_zone_info": false, 00:07:41.203 "zone_management": false, 00:07:41.203 "zone_append": false, 00:07:41.203 "compare": false, 00:07:41.203 "compare_and_write": false, 00:07:41.203 "abort": true, 00:07:41.203 "seek_hole": false, 00:07:41.203 "seek_data": false, 00:07:41.203 "copy": true, 00:07:41.203 "nvme_iov_md": false 00:07:41.203 }, 00:07:41.203 "memory_domains": [ 00:07:41.203 { 00:07:41.203 "dma_device_id": "system", 00:07:41.203 "dma_device_type": 1 00:07:41.203 }, 00:07:41.203 { 00:07:41.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.203 "dma_device_type": 2 00:07:41.203 } 00:07:41.203 ], 00:07:41.203 "driver_specific": {} 00:07:41.203 } 00:07:41.203 ] 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.203 17:48:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.203 "name": "Existed_Raid", 00:07:41.203 "uuid": "ffda3597-beef-4526-9cfa-895f07692690", 00:07:41.203 "strip_size_kb": 0, 00:07:41.203 "state": "online", 00:07:41.203 "raid_level": "raid1", 00:07:41.203 "superblock": false, 00:07:41.203 "num_base_bdevs": 2, 00:07:41.203 "num_base_bdevs_discovered": 2, 00:07:41.203 "num_base_bdevs_operational": 2, 00:07:41.203 "base_bdevs_list": [ 00:07:41.203 { 00:07:41.203 "name": "BaseBdev1", 00:07:41.203 "uuid": "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9", 00:07:41.203 "is_configured": true, 00:07:41.203 "data_offset": 0, 00:07:41.203 "data_size": 65536 00:07:41.203 }, 00:07:41.203 { 00:07:41.203 "name": "BaseBdev2", 00:07:41.203 "uuid": "390483ed-f5c4-4ab4-9405-6da887cf61b9", 00:07:41.203 "is_configured": true, 00:07:41.203 "data_offset": 0, 00:07:41.203 "data_size": 65536 00:07:41.203 } 00:07:41.203 ] 00:07:41.203 }' 00:07:41.203 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.203 17:48:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.463 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.723 [2024-10-25 17:48:59.900440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.723 17:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.723 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.723 "name": "Existed_Raid", 00:07:41.723 "aliases": [ 00:07:41.723 "ffda3597-beef-4526-9cfa-895f07692690" 00:07:41.723 ], 00:07:41.723 "product_name": "Raid Volume", 00:07:41.723 "block_size": 512, 00:07:41.723 "num_blocks": 65536, 00:07:41.723 "uuid": "ffda3597-beef-4526-9cfa-895f07692690", 00:07:41.723 "assigned_rate_limits": { 00:07:41.723 "rw_ios_per_sec": 0, 00:07:41.723 "rw_mbytes_per_sec": 0, 00:07:41.723 "r_mbytes_per_sec": 0, 00:07:41.723 
"w_mbytes_per_sec": 0 00:07:41.723 }, 00:07:41.723 "claimed": false, 00:07:41.723 "zoned": false, 00:07:41.723 "supported_io_types": { 00:07:41.723 "read": true, 00:07:41.723 "write": true, 00:07:41.723 "unmap": false, 00:07:41.723 "flush": false, 00:07:41.723 "reset": true, 00:07:41.723 "nvme_admin": false, 00:07:41.723 "nvme_io": false, 00:07:41.723 "nvme_io_md": false, 00:07:41.723 "write_zeroes": true, 00:07:41.723 "zcopy": false, 00:07:41.723 "get_zone_info": false, 00:07:41.723 "zone_management": false, 00:07:41.723 "zone_append": false, 00:07:41.723 "compare": false, 00:07:41.723 "compare_and_write": false, 00:07:41.723 "abort": false, 00:07:41.723 "seek_hole": false, 00:07:41.723 "seek_data": false, 00:07:41.723 "copy": false, 00:07:41.723 "nvme_iov_md": false 00:07:41.723 }, 00:07:41.723 "memory_domains": [ 00:07:41.723 { 00:07:41.723 "dma_device_id": "system", 00:07:41.723 "dma_device_type": 1 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.723 "dma_device_type": 2 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "dma_device_id": "system", 00:07:41.723 "dma_device_type": 1 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.723 "dma_device_type": 2 00:07:41.723 } 00:07:41.723 ], 00:07:41.723 "driver_specific": { 00:07:41.723 "raid": { 00:07:41.723 "uuid": "ffda3597-beef-4526-9cfa-895f07692690", 00:07:41.723 "strip_size_kb": 0, 00:07:41.723 "state": "online", 00:07:41.723 "raid_level": "raid1", 00:07:41.723 "superblock": false, 00:07:41.723 "num_base_bdevs": 2, 00:07:41.723 "num_base_bdevs_discovered": 2, 00:07:41.723 "num_base_bdevs_operational": 2, 00:07:41.723 "base_bdevs_list": [ 00:07:41.723 { 00:07:41.723 "name": "BaseBdev1", 00:07:41.723 "uuid": "9498eaf1-62a6-44d6-b4e9-e8ba9c6ef1a9", 00:07:41.723 "is_configured": true, 00:07:41.723 "data_offset": 0, 00:07:41.723 "data_size": 65536 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "name": "BaseBdev2", 00:07:41.723 "uuid": 
"390483ed-f5c4-4ab4-9405-6da887cf61b9", 00:07:41.723 "is_configured": true, 00:07:41.723 "data_offset": 0, 00:07:41.723 "data_size": 65536 00:07:41.723 } 00:07:41.723 ] 00:07:41.723 } 00:07:41.723 } 00:07:41.723 }' 00:07:41.723 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.723 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:41.723 BaseBdev2' 00:07:41.723 17:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:41.723 17:49:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.723 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.723 [2024-10-25 17:49:00.135917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.983 "name": "Existed_Raid", 00:07:41.983 "uuid": "ffda3597-beef-4526-9cfa-895f07692690", 00:07:41.983 "strip_size_kb": 0, 00:07:41.983 "state": "online", 00:07:41.983 "raid_level": "raid1", 00:07:41.983 "superblock": false, 00:07:41.983 "num_base_bdevs": 2, 00:07:41.983 "num_base_bdevs_discovered": 1, 00:07:41.983 "num_base_bdevs_operational": 1, 00:07:41.983 "base_bdevs_list": [ 00:07:41.983 { 
00:07:41.983 "name": null, 00:07:41.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.983 "is_configured": false, 00:07:41.983 "data_offset": 0, 00:07:41.983 "data_size": 65536 00:07:41.983 }, 00:07:41.983 { 00:07:41.983 "name": "BaseBdev2", 00:07:41.983 "uuid": "390483ed-f5c4-4ab4-9405-6da887cf61b9", 00:07:41.983 "is_configured": true, 00:07:41.983 "data_offset": 0, 00:07:41.983 "data_size": 65536 00:07:41.983 } 00:07:41.983 ] 00:07:41.983 }' 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.983 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.243 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
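The state checks in this stretch of the test all follow one pattern: fetch the raid bdev description as JSON over RPC (`bdev_raid_get_bdevs`), then use jq to select an entry and project the fields being verified (`verify_raid_bdev_state`, and the `select(.is_configured == true).name` filter from `bdev_raid.sh@188`). The sketch below reproduces that jq usage standalone, on a trimmed, hypothetical stand-in for the JSON dumped above (the field layout is simplified; assumes `jq` is on PATH):

```shell
# Hypothetical, trimmed stand-in for the bdev_raid_get_bdevs output shown
# in the log above (the real filter path is .driver_specific.raid...).
info='{
  "name": "Existed_Raid",
  "state": "online",
  "num_base_bdevs_discovered": 1,
  "base_bdevs_list": [
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}'

# Keep only configured base bdevs, as bdev_raid.sh@188 does.
configured=$(printf '%s' "$info" |
    jq -r '.base_bdevs_list[] | select(.is_configured == true).name')

# Project the field that verify_raid_bdev_state compares against expected_state.
state=$(printf '%s' "$info" | jq -r '.state')

echo "configured=$configured state=$state"
# → configured=BaseBdev2 state=online
```

After BaseBdev1 is removed, the raid1 array stays `online` with one configured base bdev, which is exactly what the `verify_raid_bdev_state Existed_Raid online raid1 0 1` call above asserts.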
00:07:42.243 [2024-10-25 17:49:00.667098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.243 [2024-10-25 17:49:00.667232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.503 [2024-10-25 17:49:00.755695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.503 [2024-10-25 17:49:00.755816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.503 [2024-10-25 17:49:00.755882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62509 00:07:42.503 17:49:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62509 ']' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62509 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62509 00:07:42.503 killing process with pid 62509 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62509' 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62509 00:07:42.503 [2024-10-25 17:49:00.840102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.503 17:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62509 00:07:42.503 [2024-10-25 17:49:00.856475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.887 00:07:43.887 real 0m4.756s 00:07:43.887 user 0m6.832s 00:07:43.887 sys 0m0.790s 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.887 ************************************ 00:07:43.887 END TEST raid_state_function_test 00:07:43.887 ************************************ 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.887 17:49:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:43.887 17:49:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:43.887 17:49:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.887 17:49:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.887 ************************************ 00:07:43.887 START TEST raid_state_function_test_sb 00:07:43.887 ************************************ 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62757 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62757' 00:07:43.887 Process raid pid: 62757 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62757 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62757 ']' 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.887 17:49:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.887 17:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.887 [2024-10-25 17:49:02.051589] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:43.887 [2024-10-25 17:49:02.051773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.887 [2024-10-25 17:49:02.226156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.147 [2024-10-25 17:49:02.331760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.147 [2024-10-25 17:49:02.527632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.147 [2024-10-25 17:49:02.527661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 [2024-10-25 17:49:02.868307] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.717 [2024-10-25 17:49:02.868358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.717 [2024-10-25 17:49:02.868368] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.717 [2024-10-25 17:49:02.868377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.717 "name": "Existed_Raid", 00:07:44.717 "uuid": "1275423e-f1e1-4962-8867-83019b8f1d5f", 00:07:44.717 "strip_size_kb": 0, 00:07:44.717 "state": "configuring", 00:07:44.717 "raid_level": "raid1", 00:07:44.717 "superblock": true, 00:07:44.717 "num_base_bdevs": 2, 00:07:44.717 "num_base_bdevs_discovered": 0, 00:07:44.717 "num_base_bdevs_operational": 2, 00:07:44.717 "base_bdevs_list": [ 00:07:44.717 { 00:07:44.717 "name": "BaseBdev1", 00:07:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.717 "is_configured": false, 00:07:44.717 "data_offset": 0, 00:07:44.717 "data_size": 0 00:07:44.717 }, 00:07:44.717 { 00:07:44.717 "name": "BaseBdev2", 00:07:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.717 "is_configured": false, 00:07:44.717 "data_offset": 0, 00:07:44.717 "data_size": 0 00:07:44.717 } 00:07:44.717 ] 00:07:44.717 }' 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.717 17:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 [2024-10-25 17:49:03.255584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:44.977 [2024-10-25 17:49:03.255662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 [2024-10-25 17:49:03.267575] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.977 [2024-10-25 17:49:03.267652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.977 [2024-10-25 17:49:03.267679] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.977 [2024-10-25 17:49:03.267703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 [2024-10-25 17:49:03.313399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.977 BaseBdev1 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.977 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.977 [ 00:07:44.977 { 00:07:44.977 "name": "BaseBdev1", 00:07:44.977 "aliases": [ 00:07:44.977 "90116697-2852-4a5c-a593-904f224cdc0e" 00:07:44.977 ], 00:07:44.977 "product_name": "Malloc disk", 00:07:44.977 "block_size": 512, 00:07:44.977 "num_blocks": 65536, 00:07:44.977 "uuid": "90116697-2852-4a5c-a593-904f224cdc0e", 00:07:44.977 "assigned_rate_limits": { 00:07:44.977 "rw_ios_per_sec": 0, 00:07:44.977 "rw_mbytes_per_sec": 0, 00:07:44.977 "r_mbytes_per_sec": 0, 00:07:44.977 "w_mbytes_per_sec": 0 00:07:44.978 }, 00:07:44.978 "claimed": true, 
00:07:44.978 "claim_type": "exclusive_write", 00:07:44.978 "zoned": false, 00:07:44.978 "supported_io_types": { 00:07:44.978 "read": true, 00:07:44.978 "write": true, 00:07:44.978 "unmap": true, 00:07:44.978 "flush": true, 00:07:44.978 "reset": true, 00:07:44.978 "nvme_admin": false, 00:07:44.978 "nvme_io": false, 00:07:44.978 "nvme_io_md": false, 00:07:44.978 "write_zeroes": true, 00:07:44.978 "zcopy": true, 00:07:44.978 "get_zone_info": false, 00:07:44.978 "zone_management": false, 00:07:44.978 "zone_append": false, 00:07:44.978 "compare": false, 00:07:44.978 "compare_and_write": false, 00:07:44.978 "abort": true, 00:07:44.978 "seek_hole": false, 00:07:44.978 "seek_data": false, 00:07:44.978 "copy": true, 00:07:44.978 "nvme_iov_md": false 00:07:44.978 }, 00:07:44.978 "memory_domains": [ 00:07:44.978 { 00:07:44.978 "dma_device_id": "system", 00:07:44.978 "dma_device_type": 1 00:07:44.978 }, 00:07:44.978 { 00:07:44.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.978 "dma_device_type": 2 00:07:44.978 } 00:07:44.978 ], 00:07:44.978 "driver_specific": {} 00:07:44.978 } 00:07:44.978 ] 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.978 "name": "Existed_Raid", 00:07:44.978 "uuid": "a6ba3502-7787-45f8-a415-88850efe67b2", 00:07:44.978 "strip_size_kb": 0, 00:07:44.978 "state": "configuring", 00:07:44.978 "raid_level": "raid1", 00:07:44.978 "superblock": true, 00:07:44.978 "num_base_bdevs": 2, 00:07:44.978 "num_base_bdevs_discovered": 1, 00:07:44.978 "num_base_bdevs_operational": 2, 00:07:44.978 "base_bdevs_list": [ 00:07:44.978 { 00:07:44.978 "name": "BaseBdev1", 00:07:44.978 "uuid": "90116697-2852-4a5c-a593-904f224cdc0e", 00:07:44.978 "is_configured": true, 00:07:44.978 "data_offset": 2048, 00:07:44.978 "data_size": 63488 00:07:44.978 }, 00:07:44.978 { 00:07:44.978 "name": "BaseBdev2", 00:07:44.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.978 "is_configured": false, 00:07:44.978 
"data_offset": 0, 00:07:44.978 "data_size": 0 00:07:44.978 } 00:07:44.978 ] 00:07:44.978 }' 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.978 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.547 [2024-10-25 17:49:03.760708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.547 [2024-10-25 17:49:03.760766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.547 [2024-10-25 17:49:03.772742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.547 [2024-10-25 17:49:03.774545] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.547 [2024-10-25 17:49:03.774589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.547 17:49:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.547 "name": "Existed_Raid", 00:07:45.547 "uuid": "dabb9725-8b29-4b28-bccd-3233b2d68e83", 00:07:45.547 "strip_size_kb": 0, 00:07:45.547 "state": "configuring", 00:07:45.548 "raid_level": "raid1", 00:07:45.548 "superblock": true, 00:07:45.548 "num_base_bdevs": 2, 00:07:45.548 "num_base_bdevs_discovered": 1, 00:07:45.548 "num_base_bdevs_operational": 2, 00:07:45.548 "base_bdevs_list": [ 00:07:45.548 { 00:07:45.548 "name": "BaseBdev1", 00:07:45.548 "uuid": "90116697-2852-4a5c-a593-904f224cdc0e", 00:07:45.548 "is_configured": true, 00:07:45.548 "data_offset": 2048, 00:07:45.548 "data_size": 63488 00:07:45.548 }, 00:07:45.548 { 00:07:45.548 "name": "BaseBdev2", 00:07:45.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.548 "is_configured": false, 00:07:45.548 "data_offset": 0, 00:07:45.548 "data_size": 0 00:07:45.548 } 00:07:45.548 ] 00:07:45.548 }' 00:07:45.548 17:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.548 17:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 [2024-10-25 17:49:04.198178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.808 [2024-10-25 17:49:04.198507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.808 [2024-10-25 17:49:04.198558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:45.808 [2024-10-25 17:49:04.198837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.808 
BaseBdev2 00:07:45.808 [2024-10-25 17:49:04.199025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.808 [2024-10-25 17:49:04.199040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.808 [2024-10-25 17:49:04.199177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.808 [ 00:07:45.808 { 00:07:45.808 "name": "BaseBdev2", 00:07:45.808 "aliases": [ 00:07:45.808 "bad24704-6230-42d6-9d03-a725eeaeedea" 00:07:45.808 ], 00:07:45.808 "product_name": "Malloc disk", 00:07:45.808 "block_size": 512, 00:07:45.808 "num_blocks": 65536, 00:07:45.808 "uuid": "bad24704-6230-42d6-9d03-a725eeaeedea", 00:07:45.808 "assigned_rate_limits": { 00:07:45.808 "rw_ios_per_sec": 0, 00:07:45.808 "rw_mbytes_per_sec": 0, 00:07:45.808 "r_mbytes_per_sec": 0, 00:07:45.808 "w_mbytes_per_sec": 0 00:07:45.808 }, 00:07:45.808 "claimed": true, 00:07:45.808 "claim_type": "exclusive_write", 00:07:45.808 "zoned": false, 00:07:45.808 "supported_io_types": { 00:07:45.808 "read": true, 00:07:45.808 "write": true, 00:07:45.808 "unmap": true, 00:07:45.808 "flush": true, 00:07:45.808 "reset": true, 00:07:45.808 "nvme_admin": false, 00:07:45.808 "nvme_io": false, 00:07:45.808 "nvme_io_md": false, 00:07:45.808 "write_zeroes": true, 00:07:45.808 "zcopy": true, 00:07:45.808 "get_zone_info": false, 00:07:45.808 "zone_management": false, 00:07:45.808 "zone_append": false, 00:07:45.808 "compare": false, 00:07:45.808 "compare_and_write": false, 00:07:45.808 "abort": true, 00:07:45.808 "seek_hole": false, 00:07:45.808 "seek_data": false, 00:07:45.808 "copy": true, 00:07:45.808 "nvme_iov_md": false 00:07:45.808 }, 00:07:45.808 "memory_domains": [ 00:07:45.808 { 00:07:45.808 "dma_device_id": "system", 00:07:45.808 "dma_device_type": 1 00:07:45.808 }, 00:07:45.808 { 00:07:45.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.808 "dma_device_type": 2 00:07:45.808 } 00:07:45.808 ], 00:07:45.808 "driver_specific": {} 00:07:45.808 } 00:07:45.808 ] 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.808 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.809 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.809 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.809 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:46.069 "name": "Existed_Raid", 00:07:46.069 "uuid": "dabb9725-8b29-4b28-bccd-3233b2d68e83", 00:07:46.069 "strip_size_kb": 0, 00:07:46.069 "state": "online", 00:07:46.069 "raid_level": "raid1", 00:07:46.069 "superblock": true, 00:07:46.069 "num_base_bdevs": 2, 00:07:46.069 "num_base_bdevs_discovered": 2, 00:07:46.069 "num_base_bdevs_operational": 2, 00:07:46.069 "base_bdevs_list": [ 00:07:46.069 { 00:07:46.069 "name": "BaseBdev1", 00:07:46.069 "uuid": "90116697-2852-4a5c-a593-904f224cdc0e", 00:07:46.069 "is_configured": true, 00:07:46.069 "data_offset": 2048, 00:07:46.069 "data_size": 63488 00:07:46.069 }, 00:07:46.069 { 00:07:46.069 "name": "BaseBdev2", 00:07:46.069 "uuid": "bad24704-6230-42d6-9d03-a725eeaeedea", 00:07:46.069 "is_configured": true, 00:07:46.069 "data_offset": 2048, 00:07:46.069 "data_size": 63488 00:07:46.069 } 00:07:46.069 ] 00:07:46.069 }' 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.069 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:46.328 17:49:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.328 [2024-10-25 17:49:04.685608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.328 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.328 "name": "Existed_Raid", 00:07:46.328 "aliases": [ 00:07:46.328 "dabb9725-8b29-4b28-bccd-3233b2d68e83" 00:07:46.328 ], 00:07:46.328 "product_name": "Raid Volume", 00:07:46.328 "block_size": 512, 00:07:46.328 "num_blocks": 63488, 00:07:46.328 "uuid": "dabb9725-8b29-4b28-bccd-3233b2d68e83", 00:07:46.328 "assigned_rate_limits": { 00:07:46.328 "rw_ios_per_sec": 0, 00:07:46.328 "rw_mbytes_per_sec": 0, 00:07:46.328 "r_mbytes_per_sec": 0, 00:07:46.328 "w_mbytes_per_sec": 0 00:07:46.328 }, 00:07:46.328 "claimed": false, 00:07:46.328 "zoned": false, 00:07:46.328 "supported_io_types": { 00:07:46.328 "read": true, 00:07:46.328 "write": true, 00:07:46.328 "unmap": false, 00:07:46.328 "flush": false, 00:07:46.328 "reset": true, 00:07:46.328 "nvme_admin": false, 00:07:46.328 "nvme_io": false, 00:07:46.328 "nvme_io_md": false, 00:07:46.328 "write_zeroes": true, 00:07:46.328 "zcopy": false, 00:07:46.328 "get_zone_info": false, 00:07:46.328 "zone_management": false, 00:07:46.328 "zone_append": false, 00:07:46.328 "compare": false, 00:07:46.328 "compare_and_write": false, 00:07:46.328 "abort": false, 00:07:46.328 "seek_hole": false, 00:07:46.328 "seek_data": false, 00:07:46.328 "copy": false, 00:07:46.328 "nvme_iov_md": false 00:07:46.328 }, 00:07:46.328 "memory_domains": [ 00:07:46.328 { 00:07:46.328 "dma_device_id": "system", 00:07:46.328 
"dma_device_type": 1 00:07:46.328 }, 00:07:46.328 { 00:07:46.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.328 "dma_device_type": 2 00:07:46.328 }, 00:07:46.328 { 00:07:46.328 "dma_device_id": "system", 00:07:46.329 "dma_device_type": 1 00:07:46.329 }, 00:07:46.329 { 00:07:46.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.329 "dma_device_type": 2 00:07:46.329 } 00:07:46.329 ], 00:07:46.329 "driver_specific": { 00:07:46.329 "raid": { 00:07:46.329 "uuid": "dabb9725-8b29-4b28-bccd-3233b2d68e83", 00:07:46.329 "strip_size_kb": 0, 00:07:46.329 "state": "online", 00:07:46.329 "raid_level": "raid1", 00:07:46.329 "superblock": true, 00:07:46.329 "num_base_bdevs": 2, 00:07:46.329 "num_base_bdevs_discovered": 2, 00:07:46.329 "num_base_bdevs_operational": 2, 00:07:46.329 "base_bdevs_list": [ 00:07:46.329 { 00:07:46.329 "name": "BaseBdev1", 00:07:46.329 "uuid": "90116697-2852-4a5c-a593-904f224cdc0e", 00:07:46.329 "is_configured": true, 00:07:46.329 "data_offset": 2048, 00:07:46.329 "data_size": 63488 00:07:46.329 }, 00:07:46.329 { 00:07:46.329 "name": "BaseBdev2", 00:07:46.329 "uuid": "bad24704-6230-42d6-9d03-a725eeaeedea", 00:07:46.329 "is_configured": true, 00:07:46.329 "data_offset": 2048, 00:07:46.329 "data_size": 63488 00:07:46.329 } 00:07:46.329 ] 00:07:46.329 } 00:07:46.329 } 00:07:46.329 }' 00:07:46.329 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.589 BaseBdev2' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.589 17:49:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 17:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 [2024-10-25 17:49:04.921004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.849 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.849 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.849 "name": "Existed_Raid", 00:07:46.849 "uuid": "dabb9725-8b29-4b28-bccd-3233b2d68e83", 00:07:46.849 "strip_size_kb": 0, 00:07:46.849 "state": "online", 00:07:46.849 "raid_level": "raid1", 00:07:46.849 "superblock": true, 00:07:46.849 "num_base_bdevs": 2, 00:07:46.849 "num_base_bdevs_discovered": 1, 00:07:46.849 "num_base_bdevs_operational": 1, 00:07:46.849 "base_bdevs_list": [ 00:07:46.849 { 00:07:46.849 "name": null, 00:07:46.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.849 "is_configured": false, 00:07:46.849 "data_offset": 0, 00:07:46.849 "data_size": 63488 00:07:46.849 }, 00:07:46.849 { 00:07:46.849 "name": "BaseBdev2", 00:07:46.849 "uuid": "bad24704-6230-42d6-9d03-a725eeaeedea", 00:07:46.849 "is_configured": true, 00:07:46.849 "data_offset": 2048, 00:07:46.849 "data_size": 63488 00:07:46.849 } 00:07:46.849 ] 00:07:46.849 }' 00:07:46.849 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.849 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.109 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.109 [2024-10-25 17:49:05.513447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.109 [2024-10-25 17:49:05.513552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.369 [2024-10-25 17:49:05.604203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.369 [2024-10-25 17:49:05.604321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.369 [2024-10-25 17:49:05.604362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62757 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62757 ']' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62757 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62757 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.369 17:49:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62757' 00:07:47.369 killing process with pid 62757 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62757 00:07:47.369 [2024-10-25 17:49:05.679401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.369 17:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62757 00:07:47.369 [2024-10-25 17:49:05.695846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.307 17:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.307 00:07:48.307 real 0m4.772s 00:07:48.307 user 0m6.810s 00:07:48.307 sys 0m0.815s 00:07:48.307 17:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.307 17:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 ************************************ 00:07:48.307 END TEST raid_state_function_test_sb 00:07:48.307 ************************************ 00:07:48.566 17:49:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:48.566 17:49:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:48.566 17:49:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.566 17:49:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.566 ************************************ 00:07:48.566 START TEST raid_superblock_test 00:07:48.566 ************************************ 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63003 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63003 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63003 ']' 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.566 17:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.566 [2024-10-25 17:49:06.896499] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:48.566 [2024-10-25 17:49:06.896753] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:07:48.828 [2024-10-25 17:49:07.076332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.828 [2024-10-25 17:49:07.180427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.118 [2024-10-25 17:49:07.358910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.118 [2024-10-25 17:49:07.359047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.393 17:49:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.393 malloc1 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.393 [2024-10-25 17:49:07.750444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.393 [2024-10-25 17:49:07.750549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.393 [2024-10-25 17:49:07.750592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.393 [2024-10-25 17:49:07.750621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.393 
[2024-10-25 17:49:07.752621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.393 [2024-10-25 17:49:07.752693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.393 pt1 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.393 malloc2 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.393 17:49:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.393 [2024-10-25 17:49:07.808425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.393 [2024-10-25 17:49:07.808516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.393 [2024-10-25 17:49:07.808553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:49.393 [2024-10-25 17:49:07.808581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.393 [2024-10-25 17:49:07.810617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.393 [2024-10-25 17:49:07.810686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.393 pt2 00:07:49.393 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.394 [2024-10-25 17:49:07.820462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.394 [2024-10-25 17:49:07.822248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.394 [2024-10-25 17:49:07.822408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.394 [2024-10-25 17:49:07.822426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:49.394 [2024-10-25 
17:49:07.822647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.394 [2024-10-25 17:49:07.822812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:49.394 [2024-10-25 17:49:07.822838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:49.394 [2024-10-25 17:49:07.823006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.394 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.654 17:49:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.654 "name": "raid_bdev1", 00:07:49.654 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:49.654 "strip_size_kb": 0, 00:07:49.654 "state": "online", 00:07:49.654 "raid_level": "raid1", 00:07:49.654 "superblock": true, 00:07:49.654 "num_base_bdevs": 2, 00:07:49.654 "num_base_bdevs_discovered": 2, 00:07:49.654 "num_base_bdevs_operational": 2, 00:07:49.654 "base_bdevs_list": [ 00:07:49.654 { 00:07:49.654 "name": "pt1", 00:07:49.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.654 "is_configured": true, 00:07:49.654 "data_offset": 2048, 00:07:49.654 "data_size": 63488 00:07:49.654 }, 00:07:49.654 { 00:07:49.654 "name": "pt2", 00:07:49.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.654 "is_configured": true, 00:07:49.654 "data_offset": 2048, 00:07:49.654 "data_size": 63488 00:07:49.654 } 00:07:49.654 ] 00:07:49.654 }' 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.654 17:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.914 
17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.914 [2024-10-25 17:49:08.244269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.914 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.914 "name": "raid_bdev1", 00:07:49.914 "aliases": [ 00:07:49.914 "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f" 00:07:49.914 ], 00:07:49.914 "product_name": "Raid Volume", 00:07:49.914 "block_size": 512, 00:07:49.914 "num_blocks": 63488, 00:07:49.914 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:49.914 "assigned_rate_limits": { 00:07:49.914 "rw_ios_per_sec": 0, 00:07:49.914 "rw_mbytes_per_sec": 0, 00:07:49.914 "r_mbytes_per_sec": 0, 00:07:49.914 "w_mbytes_per_sec": 0 00:07:49.914 }, 00:07:49.914 "claimed": false, 00:07:49.914 "zoned": false, 00:07:49.914 "supported_io_types": { 00:07:49.914 "read": true, 00:07:49.914 "write": true, 00:07:49.914 "unmap": false, 00:07:49.914 "flush": false, 00:07:49.914 "reset": true, 00:07:49.914 "nvme_admin": false, 00:07:49.914 "nvme_io": false, 00:07:49.914 "nvme_io_md": false, 00:07:49.914 "write_zeroes": true, 00:07:49.914 "zcopy": false, 00:07:49.914 "get_zone_info": false, 00:07:49.914 "zone_management": false, 00:07:49.914 "zone_append": false, 00:07:49.914 "compare": false, 00:07:49.914 "compare_and_write": false, 00:07:49.914 "abort": false, 00:07:49.914 "seek_hole": false, 
00:07:49.914 "seek_data": false, 00:07:49.914 "copy": false, 00:07:49.914 "nvme_iov_md": false 00:07:49.914 }, 00:07:49.914 "memory_domains": [ 00:07:49.914 { 00:07:49.914 "dma_device_id": "system", 00:07:49.914 "dma_device_type": 1 00:07:49.914 }, 00:07:49.914 { 00:07:49.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.914 "dma_device_type": 2 00:07:49.914 }, 00:07:49.914 { 00:07:49.914 "dma_device_id": "system", 00:07:49.914 "dma_device_type": 1 00:07:49.914 }, 00:07:49.914 { 00:07:49.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.914 "dma_device_type": 2 00:07:49.914 } 00:07:49.914 ], 00:07:49.914 "driver_specific": { 00:07:49.914 "raid": { 00:07:49.914 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:49.914 "strip_size_kb": 0, 00:07:49.915 "state": "online", 00:07:49.915 "raid_level": "raid1", 00:07:49.915 "superblock": true, 00:07:49.915 "num_base_bdevs": 2, 00:07:49.915 "num_base_bdevs_discovered": 2, 00:07:49.915 "num_base_bdevs_operational": 2, 00:07:49.915 "base_bdevs_list": [ 00:07:49.915 { 00:07:49.915 "name": "pt1", 00:07:49.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.915 "is_configured": true, 00:07:49.915 "data_offset": 2048, 00:07:49.915 "data_size": 63488 00:07:49.915 }, 00:07:49.915 { 00:07:49.915 "name": "pt2", 00:07:49.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.915 "is_configured": true, 00:07:49.915 "data_offset": 2048, 00:07:49.915 "data_size": 63488 00:07:49.915 } 00:07:49.915 ] 00:07:49.915 } 00:07:49.915 } 00:07:49.915 }' 00:07:49.915 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.915 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.915 pt2' 00:07:49.915 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.174 17:49:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.174 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.174 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:50.175 [2024-10-25 17:49:08.463785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f ']' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 [2024-10-25 17:49:08.511406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.175 [2024-10-25 17:49:08.511431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.175 [2024-10-25 17:49:08.511513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.175 [2024-10-25 17:49:08.511572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.175 [2024-10-25 17:49:08.511584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.175 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.436 [2024-10-25 17:49:08.651199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.436 [2024-10-25 17:49:08.653062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.436 [2024-10-25 17:49:08.653171] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:50.436 [2024-10-25 17:49:08.653264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.436 [2024-10-25 17:49:08.653314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.436 [2024-10-25 17:49:08.653348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:50.436 request: 00:07:50.436 { 00:07:50.436 "name": "raid_bdev1", 00:07:50.436 "raid_level": "raid1", 00:07:50.436 "base_bdevs": [ 00:07:50.436 "malloc1", 00:07:50.436 "malloc2" 00:07:50.436 ], 00:07:50.436 "superblock": false, 00:07:50.436 "method": "bdev_raid_create", 00:07:50.436 "req_id": 1 00:07:50.436 } 00:07:50.436 Got JSON-RPC error response 00:07:50.436 response: 00:07:50.436 { 00:07:50.436 "code": -17, 00:07:50.436 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.436 } 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.436 [2024-10-25 17:49:08.715083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.436 [2024-10-25 17:49:08.715193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.436 [2024-10-25 17:49:08.715226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.436 [2024-10-25 17:49:08.715256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.436 [2024-10-25 17:49:08.717432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.436 [2024-10-25 17:49:08.717510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.436 [2024-10-25 17:49:08.717618] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.436 [2024-10-25 17:49:08.717700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.436 pt1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.436 17:49:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.436 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.437 "name": "raid_bdev1", 00:07:50.437 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:50.437 "strip_size_kb": 0, 00:07:50.437 "state": "configuring", 00:07:50.437 "raid_level": "raid1", 00:07:50.437 "superblock": true, 00:07:50.437 "num_base_bdevs": 2, 00:07:50.437 "num_base_bdevs_discovered": 1, 00:07:50.437 "num_base_bdevs_operational": 2, 00:07:50.437 "base_bdevs_list": [ 00:07:50.437 { 00:07:50.437 "name": "pt1", 00:07:50.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.437 
"is_configured": true, 00:07:50.437 "data_offset": 2048, 00:07:50.437 "data_size": 63488 00:07:50.437 }, 00:07:50.437 { 00:07:50.437 "name": null, 00:07:50.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.437 "is_configured": false, 00:07:50.437 "data_offset": 2048, 00:07:50.437 "data_size": 63488 00:07:50.437 } 00:07:50.437 ] 00:07:50.437 }' 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.437 17:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.697 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.697 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.697 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.697 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.697 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.698 [2024-10-25 17:49:09.110436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.698 [2024-10-25 17:49:09.110569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.698 [2024-10-25 17:49:09.110597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.698 [2024-10-25 17:49:09.110608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.698 [2024-10-25 17:49:09.111104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.698 [2024-10-25 17:49:09.111133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.698 [2024-10-25 17:49:09.111216] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.698 [2024-10-25 17:49:09.111242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.698 [2024-10-25 17:49:09.111367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.698 [2024-10-25 17:49:09.111384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.698 [2024-10-25 17:49:09.111614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.698 [2024-10-25 17:49:09.111767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.698 [2024-10-25 17:49:09.111776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.698 [2024-10-25 17:49:09.111928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.698 pt2 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.698 
17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.698 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.958 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.958 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.958 "name": "raid_bdev1", 00:07:50.958 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:50.958 "strip_size_kb": 0, 00:07:50.958 "state": "online", 00:07:50.958 "raid_level": "raid1", 00:07:50.958 "superblock": true, 00:07:50.958 "num_base_bdevs": 2, 00:07:50.958 "num_base_bdevs_discovered": 2, 00:07:50.958 "num_base_bdevs_operational": 2, 00:07:50.958 "base_bdevs_list": [ 00:07:50.958 { 00:07:50.958 "name": "pt1", 00:07:50.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.958 "is_configured": true, 00:07:50.958 "data_offset": 2048, 00:07:50.958 "data_size": 63488 00:07:50.958 }, 00:07:50.958 { 00:07:50.958 "name": "pt2", 00:07:50.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.958 "is_configured": true, 00:07:50.958 "data_offset": 2048, 00:07:50.958 "data_size": 63488 00:07:50.958 } 00:07:50.958 ] 00:07:50.958 }' 00:07:50.958 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:50.958 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.219 [2024-10-25 17:49:09.553878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.219 "name": "raid_bdev1", 00:07:51.219 "aliases": [ 00:07:51.219 "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f" 00:07:51.219 ], 00:07:51.219 "product_name": "Raid Volume", 00:07:51.219 "block_size": 512, 00:07:51.219 "num_blocks": 63488, 00:07:51.219 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:51.219 "assigned_rate_limits": { 00:07:51.219 "rw_ios_per_sec": 0, 00:07:51.219 "rw_mbytes_per_sec": 0, 00:07:51.219 "r_mbytes_per_sec": 0, 00:07:51.219 "w_mbytes_per_sec": 0 
00:07:51.219 }, 00:07:51.219 "claimed": false, 00:07:51.219 "zoned": false, 00:07:51.219 "supported_io_types": { 00:07:51.219 "read": true, 00:07:51.219 "write": true, 00:07:51.219 "unmap": false, 00:07:51.219 "flush": false, 00:07:51.219 "reset": true, 00:07:51.219 "nvme_admin": false, 00:07:51.219 "nvme_io": false, 00:07:51.219 "nvme_io_md": false, 00:07:51.219 "write_zeroes": true, 00:07:51.219 "zcopy": false, 00:07:51.219 "get_zone_info": false, 00:07:51.219 "zone_management": false, 00:07:51.219 "zone_append": false, 00:07:51.219 "compare": false, 00:07:51.219 "compare_and_write": false, 00:07:51.219 "abort": false, 00:07:51.219 "seek_hole": false, 00:07:51.219 "seek_data": false, 00:07:51.219 "copy": false, 00:07:51.219 "nvme_iov_md": false 00:07:51.219 }, 00:07:51.219 "memory_domains": [ 00:07:51.219 { 00:07:51.219 "dma_device_id": "system", 00:07:51.219 "dma_device_type": 1 00:07:51.219 }, 00:07:51.219 { 00:07:51.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.219 "dma_device_type": 2 00:07:51.219 }, 00:07:51.219 { 00:07:51.219 "dma_device_id": "system", 00:07:51.219 "dma_device_type": 1 00:07:51.219 }, 00:07:51.219 { 00:07:51.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.219 "dma_device_type": 2 00:07:51.219 } 00:07:51.219 ], 00:07:51.219 "driver_specific": { 00:07:51.219 "raid": { 00:07:51.219 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:51.219 "strip_size_kb": 0, 00:07:51.219 "state": "online", 00:07:51.219 "raid_level": "raid1", 00:07:51.219 "superblock": true, 00:07:51.219 "num_base_bdevs": 2, 00:07:51.219 "num_base_bdevs_discovered": 2, 00:07:51.219 "num_base_bdevs_operational": 2, 00:07:51.219 "base_bdevs_list": [ 00:07:51.219 { 00:07:51.219 "name": "pt1", 00:07:51.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.219 "is_configured": true, 00:07:51.219 "data_offset": 2048, 00:07:51.219 "data_size": 63488 00:07:51.219 }, 00:07:51.219 { 00:07:51.219 "name": "pt2", 00:07:51.219 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:51.219 "is_configured": true, 00:07:51.219 "data_offset": 2048, 00:07:51.219 "data_size": 63488 00:07:51.219 } 00:07:51.219 ] 00:07:51.219 } 00:07:51.219 } 00:07:51.219 }' 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.219 pt2' 00:07:51.219 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.480 [2024-10-25 17:49:09.737510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f '!=' b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f ']' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:51.480 [2024-10-25 17:49:09.761304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.480 "name": "raid_bdev1", 
00:07:51.480 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:51.480 "strip_size_kb": 0, 00:07:51.480 "state": "online", 00:07:51.480 "raid_level": "raid1", 00:07:51.480 "superblock": true, 00:07:51.480 "num_base_bdevs": 2, 00:07:51.480 "num_base_bdevs_discovered": 1, 00:07:51.480 "num_base_bdevs_operational": 1, 00:07:51.480 "base_bdevs_list": [ 00:07:51.480 { 00:07:51.480 "name": null, 00:07:51.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.480 "is_configured": false, 00:07:51.480 "data_offset": 0, 00:07:51.480 "data_size": 63488 00:07:51.480 }, 00:07:51.480 { 00:07:51.480 "name": "pt2", 00:07:51.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.480 "is_configured": true, 00:07:51.480 "data_offset": 2048, 00:07:51.480 "data_size": 63488 00:07:51.480 } 00:07:51.480 ] 00:07:51.480 }' 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.480 17:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 [2024-10-25 17:49:10.240467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.051 [2024-10-25 17:49:10.240536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.051 [2024-10-25 17:49:10.240628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.051 [2024-10-25 17:49:10.240689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.051 [2024-10-25 17:49:10.240723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:52.051 17:49:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 [2024-10-25 17:49:10.316316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.051 [2024-10-25 17:49:10.316373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.051 [2024-10-25 17:49:10.316390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:52.051 [2024-10-25 17:49:10.316401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.051 [2024-10-25 17:49:10.318450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.051 [2024-10-25 17:49:10.318531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.051 [2024-10-25 17:49:10.318612] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.051 [2024-10-25 17:49:10.318657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.051 [2024-10-25 17:49:10.318757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:52.051 [2024-10-25 17:49:10.318770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.051 [2024-10-25 17:49:10.319005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:52.051 [2024-10-25 17:49:10.319149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:52.051 [2024-10-25 17:49:10.319158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:52.051 
[2024-10-25 17:49:10.319294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.051 pt2 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.051 "name": 
"raid_bdev1", 00:07:52.051 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:52.051 "strip_size_kb": 0, 00:07:52.051 "state": "online", 00:07:52.051 "raid_level": "raid1", 00:07:52.051 "superblock": true, 00:07:52.051 "num_base_bdevs": 2, 00:07:52.051 "num_base_bdevs_discovered": 1, 00:07:52.051 "num_base_bdevs_operational": 1, 00:07:52.051 "base_bdevs_list": [ 00:07:52.051 { 00:07:52.051 "name": null, 00:07:52.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.051 "is_configured": false, 00:07:52.051 "data_offset": 2048, 00:07:52.051 "data_size": 63488 00:07:52.051 }, 00:07:52.051 { 00:07:52.051 "name": "pt2", 00:07:52.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.051 "is_configured": true, 00:07:52.051 "data_offset": 2048, 00:07:52.051 "data_size": 63488 00:07:52.051 } 00:07:52.051 ] 00:07:52.051 }' 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.051 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.311 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.312 [2024-10-25 17:49:10.711742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.312 [2024-10-25 17:49:10.711812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.312 [2024-10-25 17:49:10.711898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.312 [2024-10-25 17:49:10.711959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.312 [2024-10-25 17:49:10.712049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.312 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.570 [2024-10-25 17:49:10.755686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:52.570 [2024-10-25 17:49:10.755773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.570 [2024-10-25 17:49:10.755807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:52.570 [2024-10-25 17:49:10.755845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.570 [2024-10-25 17:49:10.757960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.570 [2024-10-25 17:49:10.758028] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:52.570 [2024-10-25 17:49:10.758123] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:52.570 [2024-10-25 17:49:10.758178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:52.570 [2024-10-25 17:49:10.758310] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:52.570 [2024-10-25 17:49:10.758361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.570 [2024-10-25 17:49:10.758398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:52.570 [2024-10-25 17:49:10.758483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.570 [2024-10-25 17:49:10.758590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:52.570 [2024-10-25 17:49:10.758626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.570 [2024-10-25 17:49:10.758885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:52.570 [2024-10-25 17:49:10.759062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:52.570 [2024-10-25 17:49:10.759104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:52.570 [2024-10-25 17:49:10.759288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.570 pt1 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.570 "name": "raid_bdev1", 00:07:52.570 "uuid": "b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f", 00:07:52.570 "strip_size_kb": 0, 00:07:52.570 "state": "online", 00:07:52.570 "raid_level": "raid1", 00:07:52.570 "superblock": true, 00:07:52.570 "num_base_bdevs": 2, 00:07:52.570 "num_base_bdevs_discovered": 1, 00:07:52.570 "num_base_bdevs_operational": 1, 00:07:52.570 
"base_bdevs_list": [ 00:07:52.570 { 00:07:52.570 "name": null, 00:07:52.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.570 "is_configured": false, 00:07:52.570 "data_offset": 2048, 00:07:52.570 "data_size": 63488 00:07:52.570 }, 00:07:52.570 { 00:07:52.570 "name": "pt2", 00:07:52.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.570 "is_configured": true, 00:07:52.570 "data_offset": 2048, 00:07:52.570 "data_size": 63488 00:07:52.570 } 00:07:52.570 ] 00:07:52.570 }' 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.570 17:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:52.829 [2024-10-25 17:49:11.247069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.829 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f '!=' b86b15bd-5f1e-4bf1-b3d8-f59236e1b28f ']' 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63003 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63003 ']' 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63003 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63003 00:07:53.089 killing process with pid 63003 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63003' 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63003 00:07:53.089 [2024-10-25 17:49:11.340579] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.089 [2024-10-25 17:49:11.340667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.089 [2024-10-25 17:49:11.340713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.089 [2024-10-25 17:49:11.340727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:53.089 17:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63003 00:07:53.348 [2024-10-25 17:49:11.538537] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.289 ************************************ 00:07:54.289 END TEST raid_superblock_test 00:07:54.289 ************************************ 00:07:54.289 17:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:54.289 00:07:54.289 real 0m5.783s 00:07:54.289 user 0m8.718s 00:07:54.289 sys 0m1.028s 00:07:54.289 17:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.289 17:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.289 17:49:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:54.289 17:49:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.289 17:49:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.289 17:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.289 ************************************ 00:07:54.289 START TEST raid_read_error_test 00:07:54.289 ************************************ 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.289 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aevdHlgM0v 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63328 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63328 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 63328 ']' 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.290 17:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.550 [2024-10-25 17:49:12.762637] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:54.550 [2024-10-25 17:49:12.762837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63328 ] 00:07:54.550 [2024-10-25 17:49:12.934709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.810 [2024-10-25 17:49:13.034174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.810 [2024-10-25 17:49:13.223550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.810 [2024-10-25 17:49:13.223632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.380 17:49:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.380 BaseBdev1_malloc 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.380 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 true 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 [2024-10-25 17:49:13.630160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:55.381 [2024-10-25 17:49:13.630252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.381 [2024-10-25 17:49:13.630275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:55.381 [2024-10-25 17:49:13.630285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.381 [2024-10-25 17:49:13.632287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.381 [2024-10-25 17:49:13.632327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:07:55.381 BaseBdev1 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 BaseBdev2_malloc 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 true 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 [2024-10-25 17:49:13.694420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:55.381 [2024-10-25 17:49:13.694473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.381 [2024-10-25 17:49:13.694488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:55.381 [2024-10-25 17:49:13.694498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:07:55.381 [2024-10-25 17:49:13.696513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.381 [2024-10-25 17:49:13.696555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:55.381 BaseBdev2 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 [2024-10-25 17:49:13.706458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.381 [2024-10-25 17:49:13.708243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.381 [2024-10-25 17:49:13.708432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.381 [2024-10-25 17:49:13.708449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.381 [2024-10-25 17:49:13.708662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.381 [2024-10-25 17:49:13.708857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.381 [2024-10-25 17:49:13.708869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.381 [2024-10-25 17:49:13.709005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.381 "name": "raid_bdev1", 00:07:55.381 "uuid": "2b9ddc53-708b-4ceb-b60c-ae2b229f6fa1", 00:07:55.381 "strip_size_kb": 0, 00:07:55.381 "state": "online", 00:07:55.381 "raid_level": "raid1", 00:07:55.381 "superblock": true, 00:07:55.381 "num_base_bdevs": 2, 00:07:55.381 "num_base_bdevs_discovered": 2, 00:07:55.381 "num_base_bdevs_operational": 
2, 00:07:55.381 "base_bdevs_list": [ 00:07:55.381 { 00:07:55.381 "name": "BaseBdev1", 00:07:55.381 "uuid": "a6bf15b9-fc41-5b69-88ff-cbaff59f2963", 00:07:55.381 "is_configured": true, 00:07:55.381 "data_offset": 2048, 00:07:55.381 "data_size": 63488 00:07:55.381 }, 00:07:55.381 { 00:07:55.381 "name": "BaseBdev2", 00:07:55.381 "uuid": "2b755fa9-5484-5b23-a98b-f2f75d487084", 00:07:55.381 "is_configured": true, 00:07:55.381 "data_offset": 2048, 00:07:55.381 "data_size": 63488 00:07:55.381 } 00:07:55.381 ] 00:07:55.381 }' 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.381 17:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.950 17:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.950 17:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.950 [2024-10-25 17:49:14.230684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.890 
17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.890 "name": "raid_bdev1", 00:07:56.890 "uuid": "2b9ddc53-708b-4ceb-b60c-ae2b229f6fa1", 00:07:56.890 "strip_size_kb": 0, 00:07:56.890 "state": "online", 00:07:56.890 "raid_level": "raid1", 00:07:56.890 "superblock": true, 00:07:56.890 "num_base_bdevs": 
2, 00:07:56.890 "num_base_bdevs_discovered": 2, 00:07:56.890 "num_base_bdevs_operational": 2, 00:07:56.890 "base_bdevs_list": [ 00:07:56.890 { 00:07:56.890 "name": "BaseBdev1", 00:07:56.890 "uuid": "a6bf15b9-fc41-5b69-88ff-cbaff59f2963", 00:07:56.890 "is_configured": true, 00:07:56.890 "data_offset": 2048, 00:07:56.890 "data_size": 63488 00:07:56.890 }, 00:07:56.890 { 00:07:56.890 "name": "BaseBdev2", 00:07:56.890 "uuid": "2b755fa9-5484-5b23-a98b-f2f75d487084", 00:07:56.890 "is_configured": true, 00:07:56.890 "data_offset": 2048, 00:07:56.890 "data_size": 63488 00:07:56.890 } 00:07:56.890 ] 00:07:56.890 }' 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.890 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.150 [2024-10-25 17:49:15.570137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.150 [2024-10-25 17:49:15.570175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.150 [2024-10-25 17:49:15.572694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.150 [2024-10-25 17:49:15.572738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.150 [2024-10-25 17:49:15.572814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.150 [2024-10-25 17:49:15.572910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.150 { 00:07:57.150 "results": [ 00:07:57.150 { 00:07:57.150 "job": 
"raid_bdev1", 00:07:57.150 "core_mask": "0x1", 00:07:57.150 "workload": "randrw", 00:07:57.150 "percentage": 50, 00:07:57.150 "status": "finished", 00:07:57.150 "queue_depth": 1, 00:07:57.150 "io_size": 131072, 00:07:57.150 "runtime": 1.340238, 00:07:57.150 "iops": 19293.588153745826, 00:07:57.150 "mibps": 2411.6985192182283, 00:07:57.150 "io_failed": 0, 00:07:57.150 "io_timeout": 0, 00:07:57.150 "avg_latency_us": 49.421681937055624, 00:07:57.150 "min_latency_us": 21.799126637554586, 00:07:57.150 "max_latency_us": 1416.6078602620087 00:07:57.150 } 00:07:57.150 ], 00:07:57.150 "core_count": 1 00:07:57.150 } 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63328 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63328 ']' 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63328 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:57.150 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.410 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63328 00:07:57.410 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.410 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.410 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63328' 00:07:57.410 killing process with pid 63328 00:07:57.410 17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63328 00:07:57.410 [2024-10-25 17:49:15.616657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.410 
17:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63328 00:07:57.410 [2024-10-25 17:49:15.744192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aevdHlgM0v 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:58.792 00:07:58.792 real 0m4.210s 00:07:58.792 user 0m5.009s 00:07:58.792 sys 0m0.529s 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.792 17:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.792 ************************************ 00:07:58.792 END TEST raid_read_error_test 00:07:58.792 ************************************ 00:07:58.792 17:49:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:58.792 17:49:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.792 17:49:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.792 17:49:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.792 ************************************ 00:07:58.792 START TEST raid_write_error_test 00:07:58.792 ************************************ 00:07:58.792 17:49:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.792 
17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:58.792 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RQijKO3dET 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63468 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63468 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63468 ']' 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.793 17:49:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 [2024-10-25 17:49:17.038603] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:07:58.793 [2024-10-25 17:49:17.038801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63468 ] 00:07:58.793 [2024-10-25 17:49:17.211338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.053 [2024-10-25 17:49:17.320990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.313 [2024-10-25 17:49:17.516294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.313 [2024-10-25 17:49:17.516382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.574 BaseBdev1_malloc 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.574 true 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.574 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.575 [2024-10-25 17:49:17.919558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.575 [2024-10-25 17:49:17.919612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.575 [2024-10-25 17:49:17.919630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:59.575 [2024-10-25 17:49:17.919640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.575 [2024-10-25 17:49:17.921664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.575 [2024-10-25 17:49:17.921746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.575 BaseBdev1 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.575 BaseBdev2_malloc 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:59.575 17:49:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.575 true 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.575 [2024-10-25 17:49:17.986817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:59.575 [2024-10-25 17:49:17.986877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.575 [2024-10-25 17:49:17.986893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.575 [2024-10-25 17:49:17.986903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.575 [2024-10-25 17:49:17.988893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.575 [2024-10-25 17:49:17.988930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:59.575 BaseBdev2 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.575 17:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.575 [2024-10-25 17:49:17.998862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:59.575 [2024-10-25 17:49:18.000605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.575 [2024-10-25 17:49:18.000801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.575 [2024-10-25 17:49:18.000816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.575 [2024-10-25 17:49:18.001043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.575 [2024-10-25 17:49:18.001218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.575 [2024-10-25 17:49:18.001229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.575 [2024-10-25 17:49:18.001367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.575 17:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.835 "name": "raid_bdev1", 00:07:59.835 "uuid": "dabc9618-3cbe-4ee7-9451-8ef4c8927896", 00:07:59.835 "strip_size_kb": 0, 00:07:59.835 "state": "online", 00:07:59.835 "raid_level": "raid1", 00:07:59.835 "superblock": true, 00:07:59.835 "num_base_bdevs": 2, 00:07:59.835 "num_base_bdevs_discovered": 2, 00:07:59.835 "num_base_bdevs_operational": 2, 00:07:59.835 "base_bdevs_list": [ 00:07:59.835 { 00:07:59.835 "name": "BaseBdev1", 00:07:59.835 "uuid": "dd033abd-25ff-5dcb-99aa-b2a97833de9f", 00:07:59.835 "is_configured": true, 00:07:59.835 "data_offset": 2048, 00:07:59.835 "data_size": 63488 00:07:59.835 }, 00:07:59.835 { 00:07:59.835 "name": "BaseBdev2", 00:07:59.835 "uuid": "2304e546-fae5-500d-ac69-7abf548cdc89", 00:07:59.835 "is_configured": true, 00:07:59.835 "data_offset": 2048, 00:07:59.835 "data_size": 63488 00:07:59.835 } 00:07:59.835 ] 00:07:59.835 }' 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.835 17:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.094 17:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:00.094 17:49:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:00.094 [2024-10-25 17:49:18.475076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.033 [2024-10-25 17:49:19.391374] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:01.033 [2024-10-25 17:49:19.391517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.033 [2024-10-25 17:49:19.391756] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.033 "name": "raid_bdev1", 00:08:01.033 "uuid": "dabc9618-3cbe-4ee7-9451-8ef4c8927896", 00:08:01.033 "strip_size_kb": 0, 00:08:01.033 "state": "online", 00:08:01.033 "raid_level": "raid1", 00:08:01.033 "superblock": true, 00:08:01.033 "num_base_bdevs": 2, 00:08:01.033 "num_base_bdevs_discovered": 1, 00:08:01.033 "num_base_bdevs_operational": 1, 00:08:01.033 "base_bdevs_list": [ 00:08:01.033 { 00:08:01.033 "name": null, 00:08:01.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.033 "is_configured": false, 00:08:01.033 "data_offset": 0, 00:08:01.033 "data_size": 63488 00:08:01.033 }, 00:08:01.033 { 00:08:01.033 "name": 
"BaseBdev2", 00:08:01.033 "uuid": "2304e546-fae5-500d-ac69-7abf548cdc89", 00:08:01.033 "is_configured": true, 00:08:01.033 "data_offset": 2048, 00:08:01.033 "data_size": 63488 00:08:01.033 } 00:08:01.033 ] 00:08:01.033 }' 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.033 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.602 [2024-10-25 17:49:19.804172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.602 [2024-10-25 17:49:19.804205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.602 [2024-10-25 17:49:19.806756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.602 [2024-10-25 17:49:19.806799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.602 [2024-10-25 17:49:19.806864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.602 [2024-10-25 17:49:19.806876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:01.602 { 00:08:01.602 "results": [ 00:08:01.602 { 00:08:01.602 "job": "raid_bdev1", 00:08:01.602 "core_mask": "0x1", 00:08:01.602 "workload": "randrw", 00:08:01.602 "percentage": 50, 00:08:01.602 "status": "finished", 00:08:01.602 "queue_depth": 1, 00:08:01.602 "io_size": 131072, 00:08:01.602 "runtime": 1.329815, 00:08:01.602 "iops": 22019.604230663663, 00:08:01.602 "mibps": 2752.450528832958, 00:08:01.602 "io_failed": 0, 00:08:01.602 "io_timeout": 0, 
00:08:01.602 "avg_latency_us": 42.92031833795684, 00:08:01.602 "min_latency_us": 21.016593886462882, 00:08:01.602 "max_latency_us": 1352.216593886463 00:08:01.602 } 00:08:01.602 ], 00:08:01.602 "core_count": 1 00:08:01.602 } 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63468 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63468 ']' 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63468 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63468 00:08:01.602 killing process with pid 63468 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63468' 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63468 00:08:01.602 [2024-10-25 17:49:19.853262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.602 17:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63468 00:08:01.603 [2024-10-25 17:49:19.983377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RQijKO3dET 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:02.997 ************************************ 00:08:02.997 END TEST raid_write_error_test 00:08:02.997 ************************************ 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:02.997 00:08:02.997 real 0m4.148s 00:08:02.997 user 0m4.895s 00:08:02.997 sys 0m0.530s 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.997 17:49:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.997 17:49:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:02.997 17:49:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.997 17:49:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:02.997 17:49:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:02.997 17:49:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.997 17:49:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.997 ************************************ 00:08:02.997 START TEST raid_state_function_test 00:08:02.997 ************************************ 00:08:02.997 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:02.997 17:49:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:02.997 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:02.997 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.998 
17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63606 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63606' 00:08:02.998 Process raid pid: 63606 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63606 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63606 ']' 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.998 17:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.998 [2024-10-25 17:49:21.259789] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:08:02.998 [2024-10-25 17:49:21.260017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.271 [2024-10-25 17:49:21.435465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.271 [2024-10-25 17:49:21.548433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.530 [2024-10-25 17:49:21.744224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.530 [2024-10-25 17:49:21.744335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.791 [2024-10-25 17:49:22.079693] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.791 [2024-10-25 17:49:22.079811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.791 [2024-10-25 17:49:22.079841] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.791 [2024-10-25 17:49:22.079853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.791 [2024-10-25 17:49:22.079860] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:03.791 [2024-10-25 17:49:22.079869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.791 "name": "Existed_Raid", 00:08:03.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.791 "strip_size_kb": 64, 00:08:03.791 "state": "configuring", 00:08:03.791 "raid_level": "raid0", 00:08:03.791 "superblock": false, 00:08:03.791 "num_base_bdevs": 3, 00:08:03.791 "num_base_bdevs_discovered": 0, 00:08:03.791 "num_base_bdevs_operational": 3, 00:08:03.791 "base_bdevs_list": [ 00:08:03.791 { 00:08:03.791 "name": "BaseBdev1", 00:08:03.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.791 "is_configured": false, 00:08:03.791 "data_offset": 0, 00:08:03.791 "data_size": 0 00:08:03.791 }, 00:08:03.791 { 00:08:03.791 "name": "BaseBdev2", 00:08:03.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.791 "is_configured": false, 00:08:03.791 "data_offset": 0, 00:08:03.791 "data_size": 0 00:08:03.791 }, 00:08:03.791 { 00:08:03.791 "name": "BaseBdev3", 00:08:03.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.791 "is_configured": false, 00:08:03.791 "data_offset": 0, 00:08:03.791 "data_size": 0 00:08:03.791 } 00:08:03.791 ] 00:08:03.791 }' 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.791 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.360 17:49:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.360 [2024-10-25 17:49:22.538866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.360 [2024-10-25 17:49:22.538940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.360 [2024-10-25 17:49:22.550856] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.360 [2024-10-25 17:49:22.550934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.360 [2024-10-25 17:49:22.550961] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.360 [2024-10-25 17:49:22.550983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.360 [2024-10-25 17:49:22.551001] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.360 [2024-10-25 17:49:22.551022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.360 [2024-10-25 17:49:22.596532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.360 BaseBdev1 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.360 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 [ 00:08:04.361 { 00:08:04.361 "name": "BaseBdev1", 00:08:04.361 "aliases": [ 00:08:04.361 "ded17c90-f201-49ce-974a-f8a6a878367c" 00:08:04.361 ], 00:08:04.361 
"product_name": "Malloc disk", 00:08:04.361 "block_size": 512, 00:08:04.361 "num_blocks": 65536, 00:08:04.361 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:04.361 "assigned_rate_limits": { 00:08:04.361 "rw_ios_per_sec": 0, 00:08:04.361 "rw_mbytes_per_sec": 0, 00:08:04.361 "r_mbytes_per_sec": 0, 00:08:04.361 "w_mbytes_per_sec": 0 00:08:04.361 }, 00:08:04.361 "claimed": true, 00:08:04.361 "claim_type": "exclusive_write", 00:08:04.361 "zoned": false, 00:08:04.361 "supported_io_types": { 00:08:04.361 "read": true, 00:08:04.361 "write": true, 00:08:04.361 "unmap": true, 00:08:04.361 "flush": true, 00:08:04.361 "reset": true, 00:08:04.361 "nvme_admin": false, 00:08:04.361 "nvme_io": false, 00:08:04.361 "nvme_io_md": false, 00:08:04.361 "write_zeroes": true, 00:08:04.361 "zcopy": true, 00:08:04.361 "get_zone_info": false, 00:08:04.361 "zone_management": false, 00:08:04.361 "zone_append": false, 00:08:04.361 "compare": false, 00:08:04.361 "compare_and_write": false, 00:08:04.361 "abort": true, 00:08:04.361 "seek_hole": false, 00:08:04.361 "seek_data": false, 00:08:04.361 "copy": true, 00:08:04.361 "nvme_iov_md": false 00:08:04.361 }, 00:08:04.361 "memory_domains": [ 00:08:04.361 { 00:08:04.361 "dma_device_id": "system", 00:08:04.361 "dma_device_type": 1 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.361 "dma_device_type": 2 00:08:04.361 } 00:08:04.361 ], 00:08:04.361 "driver_specific": {} 00:08:04.361 } 00:08:04.361 ] 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.361 17:49:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.361 "name": "Existed_Raid", 00:08:04.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.361 "strip_size_kb": 64, 00:08:04.361 "state": "configuring", 00:08:04.361 "raid_level": "raid0", 00:08:04.361 "superblock": false, 00:08:04.361 "num_base_bdevs": 3, 00:08:04.361 "num_base_bdevs_discovered": 1, 00:08:04.361 "num_base_bdevs_operational": 3, 00:08:04.361 "base_bdevs_list": [ 00:08:04.361 { 00:08:04.361 "name": "BaseBdev1", 
00:08:04.361 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:04.361 "is_configured": true, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 65536 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "name": "BaseBdev2", 00:08:04.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.361 "is_configured": false, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 0 00:08:04.361 }, 00:08:04.361 { 00:08:04.361 "name": "BaseBdev3", 00:08:04.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.361 "is_configured": false, 00:08:04.361 "data_offset": 0, 00:08:04.361 "data_size": 0 00:08:04.361 } 00:08:04.361 ] 00:08:04.361 }' 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.361 17:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.621 [2024-10-25 17:49:23.047776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.621 [2024-10-25 17:49:23.047819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.621 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.881 [2024-10-25 
17:49:23.059801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.881 [2024-10-25 17:49:23.061615] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.881 [2024-10-25 17:49:23.061658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.881 [2024-10-25 17:49:23.061667] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.881 [2024-10-25 17:49:23.061675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.881 "name": "Existed_Raid", 00:08:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.881 "strip_size_kb": 64, 00:08:04.881 "state": "configuring", 00:08:04.881 "raid_level": "raid0", 00:08:04.881 "superblock": false, 00:08:04.881 "num_base_bdevs": 3, 00:08:04.881 "num_base_bdevs_discovered": 1, 00:08:04.881 "num_base_bdevs_operational": 3, 00:08:04.881 "base_bdevs_list": [ 00:08:04.881 { 00:08:04.881 "name": "BaseBdev1", 00:08:04.881 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:04.881 "is_configured": true, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 65536 00:08:04.881 }, 00:08:04.881 { 00:08:04.881 "name": "BaseBdev2", 00:08:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.881 "is_configured": false, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 0 00:08:04.881 }, 00:08:04.881 { 00:08:04.881 "name": "BaseBdev3", 00:08:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.881 "is_configured": false, 00:08:04.881 "data_offset": 0, 00:08:04.881 "data_size": 0 00:08:04.881 } 00:08:04.881 ] 00:08:04.881 }' 00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:04.881 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 [2024-10-25 17:49:23.528054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.141 BaseBdev2 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.141 17:49:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.141 [ 00:08:05.141 { 00:08:05.141 "name": "BaseBdev2", 00:08:05.141 "aliases": [ 00:08:05.141 "921b6252-3a0e-49ae-9565-f2293038523c" 00:08:05.141 ], 00:08:05.141 "product_name": "Malloc disk", 00:08:05.141 "block_size": 512, 00:08:05.141 "num_blocks": 65536, 00:08:05.141 "uuid": "921b6252-3a0e-49ae-9565-f2293038523c", 00:08:05.141 "assigned_rate_limits": { 00:08:05.141 "rw_ios_per_sec": 0, 00:08:05.141 "rw_mbytes_per_sec": 0, 00:08:05.141 "r_mbytes_per_sec": 0, 00:08:05.141 "w_mbytes_per_sec": 0 00:08:05.141 }, 00:08:05.141 "claimed": true, 00:08:05.141 "claim_type": "exclusive_write", 00:08:05.141 "zoned": false, 00:08:05.141 "supported_io_types": { 00:08:05.141 "read": true, 00:08:05.141 "write": true, 00:08:05.141 "unmap": true, 00:08:05.141 "flush": true, 00:08:05.141 "reset": true, 00:08:05.141 "nvme_admin": false, 00:08:05.141 "nvme_io": false, 00:08:05.141 "nvme_io_md": false, 00:08:05.141 "write_zeroes": true, 00:08:05.141 "zcopy": true, 00:08:05.141 "get_zone_info": false, 00:08:05.141 "zone_management": false, 00:08:05.141 "zone_append": false, 00:08:05.141 "compare": false, 00:08:05.141 "compare_and_write": false, 00:08:05.141 "abort": true, 00:08:05.141 "seek_hole": false, 00:08:05.141 "seek_data": false, 00:08:05.141 "copy": true, 00:08:05.141 "nvme_iov_md": false 00:08:05.141 }, 00:08:05.141 "memory_domains": [ 00:08:05.141 { 00:08:05.141 "dma_device_id": "system", 00:08:05.141 "dma_device_type": 1 00:08:05.141 }, 00:08:05.141 { 00:08:05.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.141 "dma_device_type": 2 00:08:05.141 } 00:08:05.141 ], 00:08:05.141 "driver_specific": {} 00:08:05.141 } 00:08:05.141 ] 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.141 17:49:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.141 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.142 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.402 17:49:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.402 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.402 "name": "Existed_Raid", 00:08:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.402 "strip_size_kb": 64, 00:08:05.402 "state": "configuring", 00:08:05.402 "raid_level": "raid0", 00:08:05.402 "superblock": false, 00:08:05.402 "num_base_bdevs": 3, 00:08:05.402 "num_base_bdevs_discovered": 2, 00:08:05.402 "num_base_bdevs_operational": 3, 00:08:05.402 "base_bdevs_list": [ 00:08:05.402 { 00:08:05.402 "name": "BaseBdev1", 00:08:05.402 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:05.402 "is_configured": true, 00:08:05.402 "data_offset": 0, 00:08:05.402 "data_size": 65536 00:08:05.402 }, 00:08:05.402 { 00:08:05.402 "name": "BaseBdev2", 00:08:05.402 "uuid": "921b6252-3a0e-49ae-9565-f2293038523c", 00:08:05.402 "is_configured": true, 00:08:05.402 "data_offset": 0, 00:08:05.402 "data_size": 65536 00:08:05.402 }, 00:08:05.402 { 00:08:05.402 "name": "BaseBdev3", 00:08:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.402 "is_configured": false, 00:08:05.402 "data_offset": 0, 00:08:05.402 "data_size": 0 00:08:05.402 } 00:08:05.402 ] 00:08:05.402 }' 00:08:05.402 17:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.402 17:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:05.662 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.662 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.922 [2024-10-25 17:49:24.099713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:05.922 [2024-10-25 17:49:24.099753] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.922 [2024-10-25 17:49:24.099767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:05.922 [2024-10-25 17:49:24.100088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:05.922 [2024-10-25 17:49:24.100251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.922 [2024-10-25 17:49:24.100260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:05.922 [2024-10-25 17:49:24.100549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.922 BaseBdev3 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.922 
17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.922 [ 00:08:05.922 { 00:08:05.922 "name": "BaseBdev3", 00:08:05.922 "aliases": [ 00:08:05.922 "bf7945cc-b646-449b-b12c-a47b01321018" 00:08:05.922 ], 00:08:05.922 "product_name": "Malloc disk", 00:08:05.922 "block_size": 512, 00:08:05.922 "num_blocks": 65536, 00:08:05.922 "uuid": "bf7945cc-b646-449b-b12c-a47b01321018", 00:08:05.922 "assigned_rate_limits": { 00:08:05.922 "rw_ios_per_sec": 0, 00:08:05.922 "rw_mbytes_per_sec": 0, 00:08:05.922 "r_mbytes_per_sec": 0, 00:08:05.922 "w_mbytes_per_sec": 0 00:08:05.922 }, 00:08:05.922 "claimed": true, 00:08:05.922 "claim_type": "exclusive_write", 00:08:05.922 "zoned": false, 00:08:05.922 "supported_io_types": { 00:08:05.922 "read": true, 00:08:05.922 "write": true, 00:08:05.922 "unmap": true, 00:08:05.922 "flush": true, 00:08:05.922 "reset": true, 00:08:05.922 "nvme_admin": false, 00:08:05.922 "nvme_io": false, 00:08:05.922 "nvme_io_md": false, 00:08:05.922 "write_zeroes": true, 00:08:05.922 "zcopy": true, 00:08:05.922 "get_zone_info": false, 00:08:05.922 "zone_management": false, 00:08:05.922 "zone_append": false, 00:08:05.922 "compare": false, 00:08:05.922 "compare_and_write": false, 00:08:05.922 "abort": true, 00:08:05.922 "seek_hole": false, 00:08:05.922 "seek_data": false, 00:08:05.922 "copy": true, 00:08:05.922 "nvme_iov_md": false 00:08:05.922 }, 00:08:05.922 "memory_domains": [ 00:08:05.922 { 00:08:05.922 "dma_device_id": "system", 00:08:05.922 "dma_device_type": 1 00:08:05.922 }, 00:08:05.922 { 00:08:05.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.922 "dma_device_type": 2 00:08:05.922 } 00:08:05.922 ], 00:08:05.922 "driver_specific": {} 00:08:05.922 } 00:08:05.922 ] 
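An aside for readers following this trace: the repeated `verify_raid_bdev_state` checks above pull the array's status with `rpc_cmd bdev_raid_get_bdevs all` and filter it through `jq -r '.[] | select(.name == "Existed_Raid")'`. The equivalent selection can be sketched in Python against an abbreviated copy of the JSON printed in the trace (field values mirror the log; the helper name is ours, not SPDK's):

```python
import json

# Abbreviated sample of `bdev_raid_get_bdevs all` output, mirroring the
# Existed_Raid entry shown in the trace above (configuring, 2 of 3 bdevs).
raid_bdevs_json = json.dumps([
    {
        "name": "Existed_Raid",
        "strip_size_kb": 64,
        "state": "configuring",
        "raid_level": "raid0",
        "num_base_bdevs": 3,
        "num_base_bdevs_discovered": 2,
        "num_base_bdevs_operational": 3,
    }
])

def select_raid_bdev(raw: str, name: str):
    """Python equivalent of: jq -r '.[] | select(.name == "<name>")'"""
    return next((b for b in json.loads(raw) if b["name"] == name), None)

info = select_raid_bdev(raid_bdevs_json, "Existed_Raid")
# verify_raid_bdev_state then compares these fields against the expected
# state, raid level, strip size, and operational base bdev count.
print(info["state"], info["raid_level"], info["num_base_bdevs_operational"])
# → configuring raid0 3
```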
00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.922 "name": "Existed_Raid", 00:08:05.922 "uuid": "2dd8eece-b083-4d95-8672-aad8e53de571", 00:08:05.922 "strip_size_kb": 64, 00:08:05.922 "state": "online", 00:08:05.922 "raid_level": "raid0", 00:08:05.922 "superblock": false, 00:08:05.922 "num_base_bdevs": 3, 00:08:05.922 "num_base_bdevs_discovered": 3, 00:08:05.922 "num_base_bdevs_operational": 3, 00:08:05.922 "base_bdevs_list": [ 00:08:05.922 { 00:08:05.922 "name": "BaseBdev1", 00:08:05.922 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:05.922 "is_configured": true, 00:08:05.922 "data_offset": 0, 00:08:05.922 "data_size": 65536 00:08:05.922 }, 00:08:05.922 { 00:08:05.922 "name": "BaseBdev2", 00:08:05.922 "uuid": "921b6252-3a0e-49ae-9565-f2293038523c", 00:08:05.922 "is_configured": true, 00:08:05.922 "data_offset": 0, 00:08:05.922 "data_size": 65536 00:08:05.922 }, 00:08:05.922 { 00:08:05.922 "name": "BaseBdev3", 00:08:05.922 "uuid": "bf7945cc-b646-449b-b12c-a47b01321018", 00:08:05.922 "is_configured": true, 00:08:05.922 "data_offset": 0, 00:08:05.922 "data_size": 65536 00:08:05.922 } 00:08:05.922 ] 00:08:05.922 }' 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.922 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.182 [2024-10-25 17:49:24.575204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.182 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.182 "name": "Existed_Raid", 00:08:06.182 "aliases": [ 00:08:06.182 "2dd8eece-b083-4d95-8672-aad8e53de571" 00:08:06.182 ], 00:08:06.182 "product_name": "Raid Volume", 00:08:06.182 "block_size": 512, 00:08:06.182 "num_blocks": 196608, 00:08:06.182 "uuid": "2dd8eece-b083-4d95-8672-aad8e53de571", 00:08:06.182 "assigned_rate_limits": { 00:08:06.182 "rw_ios_per_sec": 0, 00:08:06.182 "rw_mbytes_per_sec": 0, 00:08:06.182 "r_mbytes_per_sec": 0, 00:08:06.182 "w_mbytes_per_sec": 0 00:08:06.182 }, 00:08:06.182 "claimed": false, 00:08:06.182 "zoned": false, 00:08:06.182 "supported_io_types": { 00:08:06.182 "read": true, 00:08:06.182 "write": true, 00:08:06.182 "unmap": true, 00:08:06.182 "flush": true, 00:08:06.182 "reset": true, 00:08:06.182 "nvme_admin": false, 00:08:06.182 "nvme_io": false, 00:08:06.182 "nvme_io_md": false, 00:08:06.182 "write_zeroes": true, 00:08:06.182 "zcopy": false, 00:08:06.182 "get_zone_info": false, 00:08:06.182 "zone_management": false, 00:08:06.182 
"zone_append": false, 00:08:06.182 "compare": false, 00:08:06.182 "compare_and_write": false, 00:08:06.182 "abort": false, 00:08:06.182 "seek_hole": false, 00:08:06.183 "seek_data": false, 00:08:06.183 "copy": false, 00:08:06.183 "nvme_iov_md": false 00:08:06.183 }, 00:08:06.183 "memory_domains": [ 00:08:06.183 { 00:08:06.183 "dma_device_id": "system", 00:08:06.183 "dma_device_type": 1 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.183 "dma_device_type": 2 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "dma_device_id": "system", 00:08:06.183 "dma_device_type": 1 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.183 "dma_device_type": 2 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "dma_device_id": "system", 00:08:06.183 "dma_device_type": 1 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.183 "dma_device_type": 2 00:08:06.183 } 00:08:06.183 ], 00:08:06.183 "driver_specific": { 00:08:06.183 "raid": { 00:08:06.183 "uuid": "2dd8eece-b083-4d95-8672-aad8e53de571", 00:08:06.183 "strip_size_kb": 64, 00:08:06.183 "state": "online", 00:08:06.183 "raid_level": "raid0", 00:08:06.183 "superblock": false, 00:08:06.183 "num_base_bdevs": 3, 00:08:06.183 "num_base_bdevs_discovered": 3, 00:08:06.183 "num_base_bdevs_operational": 3, 00:08:06.183 "base_bdevs_list": [ 00:08:06.183 { 00:08:06.183 "name": "BaseBdev1", 00:08:06.183 "uuid": "ded17c90-f201-49ce-974a-f8a6a878367c", 00:08:06.183 "is_configured": true, 00:08:06.183 "data_offset": 0, 00:08:06.183 "data_size": 65536 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "name": "BaseBdev2", 00:08:06.183 "uuid": "921b6252-3a0e-49ae-9565-f2293038523c", 00:08:06.183 "is_configured": true, 00:08:06.183 "data_offset": 0, 00:08:06.183 "data_size": 65536 00:08:06.183 }, 00:08:06.183 { 00:08:06.183 "name": "BaseBdev3", 00:08:06.183 "uuid": "bf7945cc-b646-449b-b12c-a47b01321018", 00:08:06.183 "is_configured": true, 
00:08:06.183 "data_offset": 0, 00:08:06.183 "data_size": 65536 00:08:06.183 } 00:08:06.183 ] 00:08:06.183 } 00:08:06.183 } 00:08:06.183 }' 00:08:06.183 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.442 BaseBdev2 00:08:06.442 BaseBdev3' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.442 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 [2024-10-25 17:49:24.830511] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.442 [2024-10-25 17:49:24.830577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.442 [2024-10-25 17:49:24.830645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.702 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.702 "name": "Existed_Raid", 00:08:06.702 "uuid": "2dd8eece-b083-4d95-8672-aad8e53de571", 00:08:06.702 "strip_size_kb": 64, 00:08:06.702 "state": "offline", 00:08:06.702 "raid_level": "raid0", 00:08:06.702 "superblock": false, 00:08:06.703 "num_base_bdevs": 3, 00:08:06.703 "num_base_bdevs_discovered": 2, 00:08:06.703 "num_base_bdevs_operational": 2, 00:08:06.703 "base_bdevs_list": [ 00:08:06.703 { 00:08:06.703 "name": null, 00:08:06.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.703 "is_configured": false, 00:08:06.703 "data_offset": 0, 00:08:06.703 "data_size": 65536 00:08:06.703 }, 00:08:06.703 { 00:08:06.703 "name": "BaseBdev2", 00:08:06.703 "uuid": "921b6252-3a0e-49ae-9565-f2293038523c", 00:08:06.703 "is_configured": true, 00:08:06.703 "data_offset": 0, 00:08:06.703 "data_size": 65536 00:08:06.703 }, 00:08:06.703 { 00:08:06.703 "name": "BaseBdev3", 00:08:06.703 "uuid": "bf7945cc-b646-449b-b12c-a47b01321018", 00:08:06.703 "is_configured": true, 00:08:06.703 "data_offset": 0, 00:08:06.703 "data_size": 65536 00:08:06.703 } 00:08:06.703 ] 00:08:06.703 }' 00:08:06.703 17:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.703 17:49:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.962 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.222 [2024-10-25 17:49:25.398770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.222 17:49:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.222 [2024-10-25 17:49:25.544894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:07.222 [2024-10-25 17:49:25.544948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:07.222 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 BaseBdev2 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 [ 00:08:07.483 { 00:08:07.483 "name": "BaseBdev2", 00:08:07.483 "aliases": [ 00:08:07.483 "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23" 00:08:07.483 ], 00:08:07.483 "product_name": "Malloc disk", 00:08:07.483 "block_size": 512, 00:08:07.483 "num_blocks": 65536, 00:08:07.483 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:07.483 "assigned_rate_limits": { 00:08:07.483 "rw_ios_per_sec": 0, 00:08:07.483 "rw_mbytes_per_sec": 0, 00:08:07.483 "r_mbytes_per_sec": 0, 00:08:07.483 "w_mbytes_per_sec": 0 00:08:07.483 }, 00:08:07.483 "claimed": false, 00:08:07.483 "zoned": false, 00:08:07.483 "supported_io_types": { 00:08:07.483 "read": true, 00:08:07.483 "write": true, 00:08:07.483 "unmap": true, 00:08:07.483 "flush": true, 00:08:07.483 "reset": true, 00:08:07.483 "nvme_admin": false, 00:08:07.483 "nvme_io": false, 00:08:07.483 "nvme_io_md": false, 00:08:07.483 "write_zeroes": true, 00:08:07.483 "zcopy": true, 00:08:07.483 "get_zone_info": false, 00:08:07.483 "zone_management": false, 00:08:07.483 "zone_append": false, 00:08:07.483 "compare": false, 00:08:07.483 "compare_and_write": false, 00:08:07.483 "abort": true, 00:08:07.483 "seek_hole": false, 00:08:07.483 "seek_data": false, 00:08:07.483 "copy": true, 00:08:07.483 "nvme_iov_md": false 00:08:07.483 }, 00:08:07.483 "memory_domains": [ 00:08:07.483 { 00:08:07.483 "dma_device_id": "system", 00:08:07.483 "dma_device_type": 1 00:08:07.483 }, 
00:08:07.483 { 00:08:07.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.483 "dma_device_type": 2 00:08:07.483 } 00:08:07.483 ], 00:08:07.483 "driver_specific": {} 00:08:07.483 } 00:08:07.483 ] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 BaseBdev3 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.483 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.483 [ 00:08:07.483 { 00:08:07.483 "name": "BaseBdev3", 00:08:07.483 "aliases": [ 00:08:07.483 "2b34d8b6-27f3-41ec-a04a-3e758e916f58" 00:08:07.483 ], 00:08:07.484 "product_name": "Malloc disk", 00:08:07.484 "block_size": 512, 00:08:07.484 "num_blocks": 65536, 00:08:07.484 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:07.484 "assigned_rate_limits": { 00:08:07.484 "rw_ios_per_sec": 0, 00:08:07.484 "rw_mbytes_per_sec": 0, 00:08:07.484 "r_mbytes_per_sec": 0, 00:08:07.484 "w_mbytes_per_sec": 0 00:08:07.484 }, 00:08:07.484 "claimed": false, 00:08:07.484 "zoned": false, 00:08:07.484 "supported_io_types": { 00:08:07.484 "read": true, 00:08:07.484 "write": true, 00:08:07.484 "unmap": true, 00:08:07.484 "flush": true, 00:08:07.484 "reset": true, 00:08:07.484 "nvme_admin": false, 00:08:07.484 "nvme_io": false, 00:08:07.484 "nvme_io_md": false, 00:08:07.484 "write_zeroes": true, 00:08:07.484 "zcopy": true, 00:08:07.484 "get_zone_info": false, 00:08:07.484 "zone_management": false, 00:08:07.484 "zone_append": false, 00:08:07.484 "compare": false, 00:08:07.484 "compare_and_write": false, 00:08:07.484 "abort": true, 00:08:07.484 "seek_hole": false, 00:08:07.484 "seek_data": false, 00:08:07.484 "copy": true, 00:08:07.484 "nvme_iov_md": false 00:08:07.484 }, 00:08:07.484 "memory_domains": [ 00:08:07.484 { 00:08:07.484 "dma_device_id": "system", 00:08:07.484 "dma_device_type": 1 00:08:07.484 }, 00:08:07.484 { 
00:08:07.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.484 "dma_device_type": 2 00:08:07.484 } 00:08:07.484 ], 00:08:07.484 "driver_specific": {} 00:08:07.484 } 00:08:07.484 ] 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.484 [2024-10-25 17:49:25.829288] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.484 [2024-10-25 17:49:25.829372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.484 [2024-10-25 17:49:25.829412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.484 [2024-10-25 17:49:25.831178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.484 "name": "Existed_Raid", 00:08:07.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.484 "strip_size_kb": 64, 00:08:07.484 "state": "configuring", 00:08:07.484 "raid_level": "raid0", 00:08:07.484 "superblock": false, 00:08:07.484 "num_base_bdevs": 3, 00:08:07.484 "num_base_bdevs_discovered": 2, 00:08:07.484 "num_base_bdevs_operational": 3, 00:08:07.484 "base_bdevs_list": [ 00:08:07.484 { 00:08:07.484 "name": "BaseBdev1", 00:08:07.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.484 
"is_configured": false, 00:08:07.484 "data_offset": 0, 00:08:07.484 "data_size": 0 00:08:07.484 }, 00:08:07.484 { 00:08:07.484 "name": "BaseBdev2", 00:08:07.484 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:07.484 "is_configured": true, 00:08:07.484 "data_offset": 0, 00:08:07.484 "data_size": 65536 00:08:07.484 }, 00:08:07.484 { 00:08:07.484 "name": "BaseBdev3", 00:08:07.484 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:07.484 "is_configured": true, 00:08:07.484 "data_offset": 0, 00:08:07.484 "data_size": 65536 00:08:07.484 } 00:08:07.484 ] 00:08:07.484 }' 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.484 17:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.055 [2024-10-25 17:49:26.216592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.055 17:49:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.055 "name": "Existed_Raid", 00:08:08.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.055 "strip_size_kb": 64, 00:08:08.055 "state": "configuring", 00:08:08.055 "raid_level": "raid0", 00:08:08.055 "superblock": false, 00:08:08.055 "num_base_bdevs": 3, 00:08:08.055 "num_base_bdevs_discovered": 1, 00:08:08.055 "num_base_bdevs_operational": 3, 00:08:08.055 "base_bdevs_list": [ 00:08:08.055 { 00:08:08.055 "name": "BaseBdev1", 00:08:08.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.055 "is_configured": false, 00:08:08.055 "data_offset": 0, 00:08:08.055 "data_size": 0 00:08:08.055 }, 00:08:08.055 { 00:08:08.055 "name": null, 00:08:08.055 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:08.055 "is_configured": false, 00:08:08.055 "data_offset": 0, 
00:08:08.055 "data_size": 65536 00:08:08.055 }, 00:08:08.055 { 00:08:08.055 "name": "BaseBdev3", 00:08:08.055 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:08.055 "is_configured": true, 00:08:08.055 "data_offset": 0, 00:08:08.055 "data_size": 65536 00:08:08.055 } 00:08:08.055 ] 00:08:08.055 }' 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.055 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.315 [2024-10-25 17:49:26.717617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.315 BaseBdev1 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.315 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.315 [ 00:08:08.315 { 00:08:08.315 "name": "BaseBdev1", 00:08:08.315 "aliases": [ 00:08:08.315 "8079f1d1-257f-4434-9fef-a1b6369144d8" 00:08:08.315 ], 00:08:08.315 "product_name": "Malloc disk", 00:08:08.315 "block_size": 512, 00:08:08.315 "num_blocks": 65536, 00:08:08.315 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:08.315 "assigned_rate_limits": { 00:08:08.315 "rw_ios_per_sec": 0, 00:08:08.315 "rw_mbytes_per_sec": 0, 00:08:08.315 "r_mbytes_per_sec": 0, 00:08:08.315 "w_mbytes_per_sec": 0 00:08:08.315 }, 00:08:08.315 "claimed": true, 00:08:08.315 "claim_type": "exclusive_write", 00:08:08.315 "zoned": false, 00:08:08.315 "supported_io_types": { 00:08:08.315 "read": true, 00:08:08.315 "write": true, 00:08:08.315 "unmap": 
true, 00:08:08.315 "flush": true, 00:08:08.315 "reset": true, 00:08:08.315 "nvme_admin": false, 00:08:08.315 "nvme_io": false, 00:08:08.315 "nvme_io_md": false, 00:08:08.315 "write_zeroes": true, 00:08:08.315 "zcopy": true, 00:08:08.315 "get_zone_info": false, 00:08:08.315 "zone_management": false, 00:08:08.315 "zone_append": false, 00:08:08.575 "compare": false, 00:08:08.575 "compare_and_write": false, 00:08:08.575 "abort": true, 00:08:08.575 "seek_hole": false, 00:08:08.575 "seek_data": false, 00:08:08.575 "copy": true, 00:08:08.575 "nvme_iov_md": false 00:08:08.575 }, 00:08:08.575 "memory_domains": [ 00:08:08.575 { 00:08:08.575 "dma_device_id": "system", 00:08:08.575 "dma_device_type": 1 00:08:08.575 }, 00:08:08.575 { 00:08:08.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.575 "dma_device_type": 2 00:08:08.575 } 00:08:08.575 ], 00:08:08.575 "driver_specific": {} 00:08:08.575 } 00:08:08.575 ] 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.575 17:49:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.575 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.575 "name": "Existed_Raid", 00:08:08.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.575 "strip_size_kb": 64, 00:08:08.575 "state": "configuring", 00:08:08.575 "raid_level": "raid0", 00:08:08.575 "superblock": false, 00:08:08.575 "num_base_bdevs": 3, 00:08:08.575 "num_base_bdevs_discovered": 2, 00:08:08.575 "num_base_bdevs_operational": 3, 00:08:08.575 "base_bdevs_list": [ 00:08:08.575 { 00:08:08.575 "name": "BaseBdev1", 00:08:08.575 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:08.575 "is_configured": true, 00:08:08.575 "data_offset": 0, 00:08:08.575 "data_size": 65536 00:08:08.575 }, 00:08:08.575 { 00:08:08.575 "name": null, 00:08:08.575 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:08.575 "is_configured": false, 00:08:08.575 "data_offset": 0, 00:08:08.575 "data_size": 65536 00:08:08.575 }, 00:08:08.575 { 00:08:08.575 "name": "BaseBdev3", 00:08:08.576 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:08.576 "is_configured": true, 00:08:08.576 "data_offset": 0, 
00:08:08.576 "data_size": 65536 00:08:08.576 } 00:08:08.576 ] 00:08:08.576 }' 00:08:08.576 17:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.576 17:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.836 [2024-10-25 17:49:27.184845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.836 "name": "Existed_Raid", 00:08:08.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.836 "strip_size_kb": 64, 00:08:08.836 "state": "configuring", 00:08:08.836 "raid_level": "raid0", 00:08:08.836 "superblock": false, 00:08:08.836 "num_base_bdevs": 3, 00:08:08.836 "num_base_bdevs_discovered": 1, 00:08:08.836 "num_base_bdevs_operational": 3, 00:08:08.836 "base_bdevs_list": [ 00:08:08.836 { 00:08:08.836 "name": "BaseBdev1", 00:08:08.836 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:08.836 "is_configured": true, 00:08:08.836 "data_offset": 0, 00:08:08.836 "data_size": 65536 00:08:08.836 }, 00:08:08.836 { 
00:08:08.836 "name": null, 00:08:08.836 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:08.836 "is_configured": false, 00:08:08.836 "data_offset": 0, 00:08:08.836 "data_size": 65536 00:08:08.836 }, 00:08:08.836 { 00:08:08.836 "name": null, 00:08:08.836 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:08.836 "is_configured": false, 00:08:08.836 "data_offset": 0, 00:08:08.836 "data_size": 65536 00:08:08.836 } 00:08:08.836 ] 00:08:08.836 }' 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.836 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.406 [2024-10-25 17:49:27.628113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.406 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.407 "name": "Existed_Raid", 00:08:09.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.407 "strip_size_kb": 64, 00:08:09.407 "state": "configuring", 00:08:09.407 "raid_level": "raid0", 00:08:09.407 
"superblock": false, 00:08:09.407 "num_base_bdevs": 3, 00:08:09.407 "num_base_bdevs_discovered": 2, 00:08:09.407 "num_base_bdevs_operational": 3, 00:08:09.407 "base_bdevs_list": [ 00:08:09.407 { 00:08:09.407 "name": "BaseBdev1", 00:08:09.407 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:09.407 "is_configured": true, 00:08:09.407 "data_offset": 0, 00:08:09.407 "data_size": 65536 00:08:09.407 }, 00:08:09.407 { 00:08:09.407 "name": null, 00:08:09.407 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:09.407 "is_configured": false, 00:08:09.407 "data_offset": 0, 00:08:09.407 "data_size": 65536 00:08:09.407 }, 00:08:09.407 { 00:08:09.407 "name": "BaseBdev3", 00:08:09.407 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:09.407 "is_configured": true, 00:08:09.407 "data_offset": 0, 00:08:09.407 "data_size": 65536 00:08:09.407 } 00:08:09.407 ] 00:08:09.407 }' 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.407 17:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:09.666 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.926 [2024-10-25 17:49:28.107331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.926 "name": "Existed_Raid", 00:08:09.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.926 "strip_size_kb": 64, 00:08:09.926 "state": "configuring", 00:08:09.926 "raid_level": "raid0", 00:08:09.926 "superblock": false, 00:08:09.926 "num_base_bdevs": 3, 00:08:09.926 "num_base_bdevs_discovered": 1, 00:08:09.926 "num_base_bdevs_operational": 3, 00:08:09.926 "base_bdevs_list": [ 00:08:09.926 { 00:08:09.926 "name": null, 00:08:09.926 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:09.926 "is_configured": false, 00:08:09.926 "data_offset": 0, 00:08:09.926 "data_size": 65536 00:08:09.926 }, 00:08:09.926 { 00:08:09.926 "name": null, 00:08:09.926 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:09.926 "is_configured": false, 00:08:09.926 "data_offset": 0, 00:08:09.926 "data_size": 65536 00:08:09.926 }, 00:08:09.926 { 00:08:09.926 "name": "BaseBdev3", 00:08:09.926 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:09.926 "is_configured": true, 00:08:09.926 "data_offset": 0, 00:08:09.926 "data_size": 65536 00:08:09.926 } 00:08:09.926 ] 00:08:09.926 }' 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.926 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.496 [2024-10-25 17:49:28.695182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.496 "name": "Existed_Raid", 00:08:10.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.496 "strip_size_kb": 64, 00:08:10.496 "state": "configuring", 00:08:10.496 "raid_level": "raid0", 00:08:10.496 "superblock": false, 00:08:10.496 "num_base_bdevs": 3, 00:08:10.496 "num_base_bdevs_discovered": 2, 00:08:10.496 "num_base_bdevs_operational": 3, 00:08:10.496 "base_bdevs_list": [ 00:08:10.496 { 00:08:10.496 "name": null, 00:08:10.496 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:10.496 "is_configured": false, 00:08:10.496 "data_offset": 0, 00:08:10.496 "data_size": 65536 00:08:10.496 }, 00:08:10.496 { 00:08:10.496 "name": "BaseBdev2", 00:08:10.496 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:10.496 "is_configured": true, 00:08:10.496 "data_offset": 0, 00:08:10.496 "data_size": 65536 00:08:10.496 }, 00:08:10.496 { 00:08:10.496 "name": "BaseBdev3", 00:08:10.496 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:10.496 "is_configured": true, 00:08:10.496 "data_offset": 0, 00:08:10.496 "data_size": 65536 00:08:10.496 } 00:08:10.496 ] 00:08:10.496 }' 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.496 17:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.756 
17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:10.756 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8079f1d1-257f-4434-9fef-a1b6369144d8 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.016 [2024-10-25 17:49:29.233571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:11.016 [2024-10-25 17:49:29.233665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:11.016 [2024-10-25 17:49:29.233679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.016 [2024-10-25 17:49:29.233959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:11.016 [2024-10-25 17:49:29.234124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:11.016 [2024-10-25 17:49:29.234133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:11.016 [2024-10-25 17:49:29.234381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.016 NewBaseBdev 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.016 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.016 [ 00:08:11.016 { 00:08:11.016 "name": "NewBaseBdev", 00:08:11.016 "aliases": [ 00:08:11.016 "8079f1d1-257f-4434-9fef-a1b6369144d8" 00:08:11.016 ], 00:08:11.016 "product_name": "Malloc disk", 00:08:11.016 "block_size": 512, 00:08:11.016 "num_blocks": 65536, 00:08:11.016 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:11.016 "assigned_rate_limits": { 00:08:11.016 "rw_ios_per_sec": 0, 00:08:11.016 "rw_mbytes_per_sec": 0, 00:08:11.016 "r_mbytes_per_sec": 0, 00:08:11.016 "w_mbytes_per_sec": 0 00:08:11.016 }, 00:08:11.016 "claimed": true, 00:08:11.016 "claim_type": "exclusive_write", 00:08:11.016 "zoned": false, 00:08:11.016 "supported_io_types": { 00:08:11.017 "read": true, 00:08:11.017 "write": true, 00:08:11.017 "unmap": true, 00:08:11.017 "flush": true, 00:08:11.017 "reset": true, 00:08:11.017 "nvme_admin": false, 00:08:11.017 "nvme_io": false, 00:08:11.017 "nvme_io_md": false, 00:08:11.017 "write_zeroes": true, 00:08:11.017 "zcopy": true, 00:08:11.017 "get_zone_info": false, 00:08:11.017 "zone_management": false, 00:08:11.017 "zone_append": false, 00:08:11.017 "compare": false, 00:08:11.017 "compare_and_write": false, 00:08:11.017 "abort": true, 00:08:11.017 "seek_hole": false, 00:08:11.017 "seek_data": false, 00:08:11.017 "copy": true, 00:08:11.017 "nvme_iov_md": false 00:08:11.017 }, 00:08:11.017 "memory_domains": [ 00:08:11.017 { 00:08:11.017 "dma_device_id": "system", 00:08:11.017 "dma_device_type": 1 00:08:11.017 }, 00:08:11.017 { 00:08:11.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.017 "dma_device_type": 2 00:08:11.017 } 00:08:11.017 ], 00:08:11.017 "driver_specific": {} 00:08:11.017 } 00:08:11.017 ] 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.017 "name": "Existed_Raid", 00:08:11.017 "uuid": "896bc162-573b-4804-b7c5-aec517680986", 00:08:11.017 "strip_size_kb": 64, 00:08:11.017 "state": "online", 00:08:11.017 "raid_level": "raid0", 00:08:11.017 "superblock": false, 00:08:11.017 "num_base_bdevs": 3, 00:08:11.017 
"num_base_bdevs_discovered": 3, 00:08:11.017 "num_base_bdevs_operational": 3, 00:08:11.017 "base_bdevs_list": [ 00:08:11.017 { 00:08:11.017 "name": "NewBaseBdev", 00:08:11.017 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:11.017 "is_configured": true, 00:08:11.017 "data_offset": 0, 00:08:11.017 "data_size": 65536 00:08:11.017 }, 00:08:11.017 { 00:08:11.017 "name": "BaseBdev2", 00:08:11.017 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:11.017 "is_configured": true, 00:08:11.017 "data_offset": 0, 00:08:11.017 "data_size": 65536 00:08:11.017 }, 00:08:11.017 { 00:08:11.017 "name": "BaseBdev3", 00:08:11.017 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:11.017 "is_configured": true, 00:08:11.017 "data_offset": 0, 00:08:11.017 "data_size": 65536 00:08:11.017 } 00:08:11.017 ] 00:08:11.017 }' 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.017 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.277 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.277 [2024-10-25 17:49:29.705151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.537 "name": "Existed_Raid", 00:08:11.537 "aliases": [ 00:08:11.537 "896bc162-573b-4804-b7c5-aec517680986" 00:08:11.537 ], 00:08:11.537 "product_name": "Raid Volume", 00:08:11.537 "block_size": 512, 00:08:11.537 "num_blocks": 196608, 00:08:11.537 "uuid": "896bc162-573b-4804-b7c5-aec517680986", 00:08:11.537 "assigned_rate_limits": { 00:08:11.537 "rw_ios_per_sec": 0, 00:08:11.537 "rw_mbytes_per_sec": 0, 00:08:11.537 "r_mbytes_per_sec": 0, 00:08:11.537 "w_mbytes_per_sec": 0 00:08:11.537 }, 00:08:11.537 "claimed": false, 00:08:11.537 "zoned": false, 00:08:11.537 "supported_io_types": { 00:08:11.537 "read": true, 00:08:11.537 "write": true, 00:08:11.537 "unmap": true, 00:08:11.537 "flush": true, 00:08:11.537 "reset": true, 00:08:11.537 "nvme_admin": false, 00:08:11.537 "nvme_io": false, 00:08:11.537 "nvme_io_md": false, 00:08:11.537 "write_zeroes": true, 00:08:11.537 "zcopy": false, 00:08:11.537 "get_zone_info": false, 00:08:11.537 "zone_management": false, 00:08:11.537 "zone_append": false, 00:08:11.537 "compare": false, 00:08:11.537 "compare_and_write": false, 00:08:11.537 "abort": false, 00:08:11.537 "seek_hole": false, 00:08:11.537 "seek_data": false, 00:08:11.537 "copy": false, 00:08:11.537 "nvme_iov_md": false 00:08:11.537 }, 00:08:11.537 "memory_domains": [ 00:08:11.537 { 00:08:11.537 "dma_device_id": "system", 00:08:11.537 "dma_device_type": 1 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.537 "dma_device_type": 2 00:08:11.537 }, 
00:08:11.537 { 00:08:11.537 "dma_device_id": "system", 00:08:11.537 "dma_device_type": 1 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.537 "dma_device_type": 2 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "dma_device_id": "system", 00:08:11.537 "dma_device_type": 1 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.537 "dma_device_type": 2 00:08:11.537 } 00:08:11.537 ], 00:08:11.537 "driver_specific": { 00:08:11.537 "raid": { 00:08:11.537 "uuid": "896bc162-573b-4804-b7c5-aec517680986", 00:08:11.537 "strip_size_kb": 64, 00:08:11.537 "state": "online", 00:08:11.537 "raid_level": "raid0", 00:08:11.537 "superblock": false, 00:08:11.537 "num_base_bdevs": 3, 00:08:11.537 "num_base_bdevs_discovered": 3, 00:08:11.537 "num_base_bdevs_operational": 3, 00:08:11.537 "base_bdevs_list": [ 00:08:11.537 { 00:08:11.537 "name": "NewBaseBdev", 00:08:11.537 "uuid": "8079f1d1-257f-4434-9fef-a1b6369144d8", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 0, 00:08:11.537 "data_size": 65536 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "name": "BaseBdev2", 00:08:11.537 "uuid": "e6d356bf-dd01-4b51-8ed6-ceaf4c5fff23", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 0, 00:08:11.537 "data_size": 65536 00:08:11.537 }, 00:08:11.537 { 00:08:11.537 "name": "BaseBdev3", 00:08:11.537 "uuid": "2b34d8b6-27f3-41ec-a04a-3e758e916f58", 00:08:11.537 "is_configured": true, 00:08:11.537 "data_offset": 0, 00:08:11.537 "data_size": 65536 00:08:11.537 } 00:08:11.537 ] 00:08:11.537 } 00:08:11.537 } 00:08:11.537 }' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:11.537 BaseBdev2 00:08:11.537 BaseBdev3' 00:08:11.537 17:49:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.537 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.796 [2024-10-25 17:49:29.984336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.796 [2024-10-25 17:49:29.984406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.796 [2024-10-25 17:49:29.984498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.796 [2024-10-25 17:49:29.984588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.796 [2024-10-25 17:49:29.984624] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63606 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63606 ']' 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63606 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.796 17:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63606 00:08:11.796 killing process with pid 63606 00:08:11.796 17:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.796 17:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.796 17:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63606' 00:08:11.796 17:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63606 00:08:11.796 [2024-10-25 17:49:30.032233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.796 17:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63606 00:08:12.055 [2024-10-25 17:49:30.318079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.995 17:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.995 00:08:12.995 real 0m10.201s 00:08:12.995 user 0m16.261s 00:08:12.995 sys 0m1.798s 00:08:12.995 17:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:08:12.995 17:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.995 ************************************ 00:08:12.995 END TEST raid_state_function_test 00:08:12.995 ************************************ 00:08:12.995 17:49:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:12.995 17:49:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.995 17:49:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.995 17:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.255 ************************************ 00:08:13.255 START TEST raid_state_function_test_sb 00:08:13.255 ************************************ 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64224 00:08:13.255 17:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64224' 00:08:13.255 Process raid pid: 64224 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64224 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64224 ']' 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.255 17:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.255 [2024-10-25 17:49:31.541960] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:08:13.255 [2024-10-25 17:49:31.542090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.515 [2024-10-25 17:49:31.710937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.515 [2024-10-25 17:49:31.824051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.775 [2024-10-25 17:49:32.024405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.775 [2024-10-25 17:49:32.024444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.035 [2024-10-25 17:49:32.348573] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.035 [2024-10-25 17:49:32.348625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.035 [2024-10-25 17:49:32.348636] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.035 [2024-10-25 17:49:32.348645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.035 [2024-10-25 17:49:32.348652] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:14.035 [2024-10-25 17:49:32.348661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.035 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.035 "name": "Existed_Raid", 00:08:14.035 "uuid": "9612468e-bc47-429f-8622-505228c1f650", 00:08:14.035 "strip_size_kb": 64, 00:08:14.035 "state": "configuring", 00:08:14.035 "raid_level": "raid0", 00:08:14.035 "superblock": true, 00:08:14.035 "num_base_bdevs": 3, 00:08:14.035 "num_base_bdevs_discovered": 0, 00:08:14.035 "num_base_bdevs_operational": 3, 00:08:14.035 "base_bdevs_list": [ 00:08:14.035 { 00:08:14.035 "name": "BaseBdev1", 00:08:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.035 "is_configured": false, 00:08:14.035 "data_offset": 0, 00:08:14.035 "data_size": 0 00:08:14.035 }, 00:08:14.035 { 00:08:14.035 "name": "BaseBdev2", 00:08:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.035 "is_configured": false, 00:08:14.035 "data_offset": 0, 00:08:14.035 "data_size": 0 00:08:14.035 }, 00:08:14.035 { 00:08:14.035 "name": "BaseBdev3", 00:08:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.036 "is_configured": false, 00:08:14.036 "data_offset": 0, 00:08:14.036 "data_size": 0 00:08:14.036 } 00:08:14.036 ] 00:08:14.036 }' 00:08:14.036 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.036 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 [2024-10-25 17:49:32.771801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.605 [2024-10-25 17:49:32.771914] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 [2024-10-25 17:49:32.783779] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.605 [2024-10-25 17:49:32.783876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.605 [2024-10-25 17:49:32.783906] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.605 [2024-10-25 17:49:32.783929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.605 [2024-10-25 17:49:32.783947] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.605 [2024-10-25 17:49:32.783968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 [2024-10-25 17:49:32.828333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.605 BaseBdev1 
00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.605 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.605 [ 00:08:14.605 { 00:08:14.605 "name": "BaseBdev1", 00:08:14.605 "aliases": [ 00:08:14.605 "b43c7711-1bed-407d-b44e-39b6ad371c21" 00:08:14.605 ], 00:08:14.605 "product_name": "Malloc disk", 00:08:14.605 "block_size": 512, 00:08:14.605 "num_blocks": 65536, 00:08:14.605 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:14.605 "assigned_rate_limits": { 00:08:14.605 
"rw_ios_per_sec": 0, 00:08:14.605 "rw_mbytes_per_sec": 0, 00:08:14.605 "r_mbytes_per_sec": 0, 00:08:14.605 "w_mbytes_per_sec": 0 00:08:14.605 }, 00:08:14.606 "claimed": true, 00:08:14.606 "claim_type": "exclusive_write", 00:08:14.606 "zoned": false, 00:08:14.606 "supported_io_types": { 00:08:14.606 "read": true, 00:08:14.606 "write": true, 00:08:14.606 "unmap": true, 00:08:14.606 "flush": true, 00:08:14.606 "reset": true, 00:08:14.606 "nvme_admin": false, 00:08:14.606 "nvme_io": false, 00:08:14.606 "nvme_io_md": false, 00:08:14.606 "write_zeroes": true, 00:08:14.606 "zcopy": true, 00:08:14.606 "get_zone_info": false, 00:08:14.606 "zone_management": false, 00:08:14.606 "zone_append": false, 00:08:14.606 "compare": false, 00:08:14.606 "compare_and_write": false, 00:08:14.606 "abort": true, 00:08:14.606 "seek_hole": false, 00:08:14.606 "seek_data": false, 00:08:14.606 "copy": true, 00:08:14.606 "nvme_iov_md": false 00:08:14.606 }, 00:08:14.606 "memory_domains": [ 00:08:14.606 { 00:08:14.606 "dma_device_id": "system", 00:08:14.606 "dma_device_type": 1 00:08:14.606 }, 00:08:14.606 { 00:08:14.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.606 "dma_device_type": 2 00:08:14.606 } 00:08:14.606 ], 00:08:14.606 "driver_specific": {} 00:08:14.606 } 00:08:14.606 ] 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.606 "name": "Existed_Raid", 00:08:14.606 "uuid": "e056127e-5cc9-4789-bc39-fbcc7810925a", 00:08:14.606 "strip_size_kb": 64, 00:08:14.606 "state": "configuring", 00:08:14.606 "raid_level": "raid0", 00:08:14.606 "superblock": true, 00:08:14.606 "num_base_bdevs": 3, 00:08:14.606 "num_base_bdevs_discovered": 1, 00:08:14.606 "num_base_bdevs_operational": 3, 00:08:14.606 "base_bdevs_list": [ 00:08:14.606 { 00:08:14.606 "name": "BaseBdev1", 00:08:14.606 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:14.606 "is_configured": true, 00:08:14.606 "data_offset": 2048, 00:08:14.606 "data_size": 63488 
00:08:14.606 }, 00:08:14.606 { 00:08:14.606 "name": "BaseBdev2", 00:08:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.606 "is_configured": false, 00:08:14.606 "data_offset": 0, 00:08:14.606 "data_size": 0 00:08:14.606 }, 00:08:14.606 { 00:08:14.606 "name": "BaseBdev3", 00:08:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.606 "is_configured": false, 00:08:14.606 "data_offset": 0, 00:08:14.606 "data_size": 0 00:08:14.606 } 00:08:14.606 ] 00:08:14.606 }' 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.606 17:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.175 [2024-10-25 17:49:33.315522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.175 [2024-10-25 17:49:33.315616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.175 [2024-10-25 17:49:33.327555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.175 [2024-10-25 
17:49:33.329289] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.175 [2024-10-25 17:49:33.329333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.175 [2024-10-25 17:49:33.329343] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.175 [2024-10-25 17:49:33.329352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.175 "name": "Existed_Raid", 00:08:15.175 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:15.175 "strip_size_kb": 64, 00:08:15.175 "state": "configuring", 00:08:15.175 "raid_level": "raid0", 00:08:15.175 "superblock": true, 00:08:15.175 "num_base_bdevs": 3, 00:08:15.175 "num_base_bdevs_discovered": 1, 00:08:15.175 "num_base_bdevs_operational": 3, 00:08:15.175 "base_bdevs_list": [ 00:08:15.175 { 00:08:15.175 "name": "BaseBdev1", 00:08:15.175 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:15.175 "is_configured": true, 00:08:15.175 "data_offset": 2048, 00:08:15.175 "data_size": 63488 00:08:15.175 }, 00:08:15.175 { 00:08:15.175 "name": "BaseBdev2", 00:08:15.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.175 "is_configured": false, 00:08:15.175 "data_offset": 0, 00:08:15.175 "data_size": 0 00:08:15.175 }, 00:08:15.175 { 00:08:15.175 "name": "BaseBdev3", 00:08:15.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.175 "is_configured": false, 00:08:15.175 "data_offset": 0, 00:08:15.175 "data_size": 0 00:08:15.175 } 00:08:15.175 ] 00:08:15.175 }' 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.175 17:49:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.435 [2024-10-25 17:49:33.743130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.435 BaseBdev2 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.435 [ 00:08:15.435 { 00:08:15.435 "name": "BaseBdev2", 00:08:15.435 "aliases": [ 00:08:15.435 "ec08c3d3-8e96-483b-908f-8c25d386f3b7" 00:08:15.435 ], 00:08:15.435 "product_name": "Malloc disk", 00:08:15.435 "block_size": 512, 00:08:15.435 "num_blocks": 65536, 00:08:15.435 "uuid": "ec08c3d3-8e96-483b-908f-8c25d386f3b7", 00:08:15.435 "assigned_rate_limits": { 00:08:15.435 "rw_ios_per_sec": 0, 00:08:15.435 "rw_mbytes_per_sec": 0, 00:08:15.435 "r_mbytes_per_sec": 0, 00:08:15.435 "w_mbytes_per_sec": 0 00:08:15.435 }, 00:08:15.435 "claimed": true, 00:08:15.435 "claim_type": "exclusive_write", 00:08:15.435 "zoned": false, 00:08:15.435 "supported_io_types": { 00:08:15.435 "read": true, 00:08:15.435 "write": true, 00:08:15.435 "unmap": true, 00:08:15.435 "flush": true, 00:08:15.435 "reset": true, 00:08:15.435 "nvme_admin": false, 00:08:15.435 "nvme_io": false, 00:08:15.435 "nvme_io_md": false, 00:08:15.435 "write_zeroes": true, 00:08:15.435 "zcopy": true, 00:08:15.435 "get_zone_info": false, 00:08:15.435 "zone_management": false, 00:08:15.435 "zone_append": false, 00:08:15.435 "compare": false, 00:08:15.435 "compare_and_write": false, 00:08:15.435 "abort": true, 00:08:15.435 "seek_hole": false, 00:08:15.435 "seek_data": false, 00:08:15.435 "copy": true, 00:08:15.435 "nvme_iov_md": false 00:08:15.435 }, 00:08:15.435 "memory_domains": [ 00:08:15.435 { 00:08:15.435 "dma_device_id": "system", 00:08:15.435 "dma_device_type": 1 00:08:15.435 }, 00:08:15.435 { 00:08:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.435 "dma_device_type": 2 00:08:15.435 } 00:08:15.435 ], 00:08:15.435 "driver_specific": {} 00:08:15.435 } 00:08:15.435 ] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.435 "name": "Existed_Raid", 00:08:15.435 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:15.435 "strip_size_kb": 64, 00:08:15.435 "state": "configuring", 00:08:15.435 "raid_level": "raid0", 00:08:15.435 "superblock": true, 00:08:15.435 "num_base_bdevs": 3, 00:08:15.435 "num_base_bdevs_discovered": 2, 00:08:15.435 "num_base_bdevs_operational": 3, 00:08:15.435 "base_bdevs_list": [ 00:08:15.435 { 00:08:15.435 "name": "BaseBdev1", 00:08:15.435 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:15.435 "is_configured": true, 00:08:15.435 "data_offset": 2048, 00:08:15.435 "data_size": 63488 00:08:15.435 }, 00:08:15.435 { 00:08:15.435 "name": "BaseBdev2", 00:08:15.435 "uuid": "ec08c3d3-8e96-483b-908f-8c25d386f3b7", 00:08:15.435 "is_configured": true, 00:08:15.435 "data_offset": 2048, 00:08:15.435 "data_size": 63488 00:08:15.435 }, 00:08:15.435 { 00:08:15.435 "name": "BaseBdev3", 00:08:15.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.435 "is_configured": false, 00:08:15.435 "data_offset": 0, 00:08:15.435 "data_size": 0 00:08:15.435 } 00:08:15.435 ] 00:08:15.435 }' 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.435 17:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 [2024-10-25 17:49:34.278803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.003 [2024-10-25 17:49:34.279096] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.003 [2024-10-25 17:49:34.279120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.003 [2024-10-25 17:49:34.279400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:16.003 [2024-10-25 17:49:34.279544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.003 [2024-10-25 17:49:34.279553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:16.003 BaseBdev3 00:08:16.003 [2024-10-25 17:49:34.279708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 [ 00:08:16.003 { 00:08:16.003 "name": "BaseBdev3", 00:08:16.003 "aliases": [ 00:08:16.003 "8c75cf0b-61e9-443b-8a60-51c5127a7659" 00:08:16.003 ], 00:08:16.003 "product_name": "Malloc disk", 00:08:16.003 "block_size": 512, 00:08:16.003 "num_blocks": 65536, 00:08:16.003 "uuid": "8c75cf0b-61e9-443b-8a60-51c5127a7659", 00:08:16.003 "assigned_rate_limits": { 00:08:16.003 "rw_ios_per_sec": 0, 00:08:16.003 "rw_mbytes_per_sec": 0, 00:08:16.003 "r_mbytes_per_sec": 0, 00:08:16.003 "w_mbytes_per_sec": 0 00:08:16.003 }, 00:08:16.003 "claimed": true, 00:08:16.003 "claim_type": "exclusive_write", 00:08:16.003 "zoned": false, 00:08:16.003 "supported_io_types": { 00:08:16.003 "read": true, 00:08:16.003 "write": true, 00:08:16.003 "unmap": true, 00:08:16.003 "flush": true, 00:08:16.003 "reset": true, 00:08:16.003 "nvme_admin": false, 00:08:16.003 "nvme_io": false, 00:08:16.003 "nvme_io_md": false, 00:08:16.003 "write_zeroes": true, 00:08:16.003 "zcopy": true, 00:08:16.003 "get_zone_info": false, 00:08:16.003 "zone_management": false, 00:08:16.003 "zone_append": false, 00:08:16.003 "compare": false, 00:08:16.003 "compare_and_write": false, 00:08:16.003 "abort": true, 00:08:16.003 "seek_hole": false, 00:08:16.003 "seek_data": false, 00:08:16.003 "copy": true, 00:08:16.003 "nvme_iov_md": false 00:08:16.003 }, 00:08:16.003 "memory_domains": [ 00:08:16.003 { 00:08:16.003 "dma_device_id": "system", 00:08:16.003 "dma_device_type": 1 00:08:16.003 }, 00:08:16.003 { 00:08:16.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.003 "dma_device_type": 2 00:08:16.003 } 00:08:16.003 ], 00:08:16.003 "driver_specific": 
{} 00:08:16.003 } 00:08:16.003 ] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.003 "name": "Existed_Raid", 00:08:16.003 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:16.003 "strip_size_kb": 64, 00:08:16.003 "state": "online", 00:08:16.003 "raid_level": "raid0", 00:08:16.003 "superblock": true, 00:08:16.003 "num_base_bdevs": 3, 00:08:16.003 "num_base_bdevs_discovered": 3, 00:08:16.003 "num_base_bdevs_operational": 3, 00:08:16.003 "base_bdevs_list": [ 00:08:16.003 { 00:08:16.003 "name": "BaseBdev1", 00:08:16.003 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:16.003 "is_configured": true, 00:08:16.003 "data_offset": 2048, 00:08:16.003 "data_size": 63488 00:08:16.003 }, 00:08:16.003 { 00:08:16.003 "name": "BaseBdev2", 00:08:16.003 "uuid": "ec08c3d3-8e96-483b-908f-8c25d386f3b7", 00:08:16.003 "is_configured": true, 00:08:16.003 "data_offset": 2048, 00:08:16.003 "data_size": 63488 00:08:16.003 }, 00:08:16.003 { 00:08:16.003 "name": "BaseBdev3", 00:08:16.003 "uuid": "8c75cf0b-61e9-443b-8a60-51c5127a7659", 00:08:16.003 "is_configured": true, 00:08:16.003 "data_offset": 2048, 00:08:16.003 "data_size": 63488 00:08:16.003 } 00:08:16.003 ] 00:08:16.003 }' 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.003 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 [2024-10-25 17:49:34.742351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.610 "name": "Existed_Raid", 00:08:16.610 "aliases": [ 00:08:16.610 "3dee0058-1a9b-4f5c-a1d3-75061825c159" 00:08:16.610 ], 00:08:16.610 "product_name": "Raid Volume", 00:08:16.610 "block_size": 512, 00:08:16.610 "num_blocks": 190464, 00:08:16.610 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:16.610 "assigned_rate_limits": { 00:08:16.610 "rw_ios_per_sec": 0, 00:08:16.610 "rw_mbytes_per_sec": 0, 00:08:16.610 "r_mbytes_per_sec": 0, 00:08:16.610 "w_mbytes_per_sec": 0 00:08:16.610 }, 00:08:16.610 "claimed": false, 00:08:16.610 "zoned": false, 00:08:16.610 "supported_io_types": { 00:08:16.610 "read": true, 00:08:16.610 "write": true, 00:08:16.610 "unmap": true, 00:08:16.610 "flush": true, 00:08:16.610 "reset": true, 00:08:16.610 "nvme_admin": false, 00:08:16.610 "nvme_io": false, 00:08:16.610 "nvme_io_md": false, 00:08:16.610 
"write_zeroes": true, 00:08:16.610 "zcopy": false, 00:08:16.610 "get_zone_info": false, 00:08:16.610 "zone_management": false, 00:08:16.610 "zone_append": false, 00:08:16.610 "compare": false, 00:08:16.610 "compare_and_write": false, 00:08:16.610 "abort": false, 00:08:16.610 "seek_hole": false, 00:08:16.610 "seek_data": false, 00:08:16.610 "copy": false, 00:08:16.610 "nvme_iov_md": false 00:08:16.610 }, 00:08:16.610 "memory_domains": [ 00:08:16.610 { 00:08:16.610 "dma_device_id": "system", 00:08:16.610 "dma_device_type": 1 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.610 "dma_device_type": 2 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "dma_device_id": "system", 00:08:16.610 "dma_device_type": 1 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.610 "dma_device_type": 2 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "dma_device_id": "system", 00:08:16.610 "dma_device_type": 1 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.610 "dma_device_type": 2 00:08:16.610 } 00:08:16.610 ], 00:08:16.610 "driver_specific": { 00:08:16.610 "raid": { 00:08:16.610 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:16.610 "strip_size_kb": 64, 00:08:16.610 "state": "online", 00:08:16.610 "raid_level": "raid0", 00:08:16.610 "superblock": true, 00:08:16.610 "num_base_bdevs": 3, 00:08:16.610 "num_base_bdevs_discovered": 3, 00:08:16.610 "num_base_bdevs_operational": 3, 00:08:16.610 "base_bdevs_list": [ 00:08:16.610 { 00:08:16.610 "name": "BaseBdev1", 00:08:16.610 "uuid": "b43c7711-1bed-407d-b44e-39b6ad371c21", 00:08:16.610 "is_configured": true, 00:08:16.610 "data_offset": 2048, 00:08:16.610 "data_size": 63488 00:08:16.610 }, 00:08:16.610 { 00:08:16.610 "name": "BaseBdev2", 00:08:16.610 "uuid": "ec08c3d3-8e96-483b-908f-8c25d386f3b7", 00:08:16.610 "is_configured": true, 00:08:16.610 "data_offset": 2048, 00:08:16.610 "data_size": 63488 00:08:16.610 }, 
00:08:16.610 { 00:08:16.610 "name": "BaseBdev3", 00:08:16.610 "uuid": "8c75cf0b-61e9-443b-8a60-51c5127a7659", 00:08:16.610 "is_configured": true, 00:08:16.610 "data_offset": 2048, 00:08:16.610 "data_size": 63488 00:08:16.610 } 00:08:16.610 ] 00:08:16.610 } 00:08:16.610 } 00:08:16.610 }' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.610 BaseBdev2 00:08:16.610 BaseBdev3' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.610 
17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 17:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.610 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.610 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.610 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.610 17:49:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.610 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.610 [2024-10-25 17:49:35.021604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.610 [2024-10-25 17:49:35.021674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.610 [2024-10-25 17:49:35.021746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.870 "name": "Existed_Raid", 00:08:16.870 "uuid": "3dee0058-1a9b-4f5c-a1d3-75061825c159", 00:08:16.870 "strip_size_kb": 64, 00:08:16.870 "state": "offline", 00:08:16.870 "raid_level": "raid0", 00:08:16.870 "superblock": true, 00:08:16.870 "num_base_bdevs": 3, 00:08:16.870 "num_base_bdevs_discovered": 2, 00:08:16.870 "num_base_bdevs_operational": 2, 00:08:16.870 "base_bdevs_list": [ 00:08:16.870 { 00:08:16.870 "name": null, 00:08:16.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.870 "is_configured": false, 00:08:16.870 "data_offset": 0, 00:08:16.870 "data_size": 63488 00:08:16.870 }, 00:08:16.870 { 00:08:16.870 "name": "BaseBdev2", 00:08:16.870 "uuid": "ec08c3d3-8e96-483b-908f-8c25d386f3b7", 00:08:16.870 "is_configured": true, 00:08:16.870 "data_offset": 2048, 00:08:16.870 "data_size": 63488 00:08:16.870 }, 00:08:16.870 { 00:08:16.870 "name": "BaseBdev3", 00:08:16.870 "uuid": "8c75cf0b-61e9-443b-8a60-51c5127a7659", 
00:08:16.870 "is_configured": true, 00:08:16.870 "data_offset": 2048, 00:08:16.870 "data_size": 63488 00:08:16.870 } 00:08:16.870 ] 00:08:16.870 }' 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.870 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.129 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.129 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.129 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.129 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.129 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.130 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.130 [2024-10-25 17:49:35.553561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.408 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.408 17:49:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.408 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.409 [2024-10-25 17:49:35.697535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.409 [2024-10-25 17:49:35.697584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.409 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.710 BaseBdev2 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:17.710 17:49:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.710 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.710 [ 00:08:17.710 { 00:08:17.710 "name": "BaseBdev2", 00:08:17.710 "aliases": [ 00:08:17.710 "d3f2979a-cfd9-4003-b811-4a0c1b7eeede" 00:08:17.710 ], 00:08:17.710 "product_name": "Malloc disk", 00:08:17.710 "block_size": 512, 00:08:17.710 "num_blocks": 65536, 00:08:17.710 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:17.710 "assigned_rate_limits": { 00:08:17.710 "rw_ios_per_sec": 0, 00:08:17.710 "rw_mbytes_per_sec": 0, 00:08:17.710 "r_mbytes_per_sec": 0, 00:08:17.710 "w_mbytes_per_sec": 0 00:08:17.710 }, 00:08:17.710 "claimed": false, 00:08:17.710 "zoned": false, 00:08:17.710 "supported_io_types": { 00:08:17.710 "read": true, 00:08:17.710 "write": true, 00:08:17.710 "unmap": true, 00:08:17.710 "flush": true, 00:08:17.710 "reset": true, 00:08:17.710 "nvme_admin": false, 00:08:17.710 "nvme_io": false, 00:08:17.711 "nvme_io_md": false, 00:08:17.711 "write_zeroes": true, 00:08:17.711 "zcopy": true, 00:08:17.711 "get_zone_info": false, 00:08:17.711 
"zone_management": false, 00:08:17.711 "zone_append": false, 00:08:17.711 "compare": false, 00:08:17.711 "compare_and_write": false, 00:08:17.711 "abort": true, 00:08:17.711 "seek_hole": false, 00:08:17.711 "seek_data": false, 00:08:17.711 "copy": true, 00:08:17.711 "nvme_iov_md": false 00:08:17.711 }, 00:08:17.711 "memory_domains": [ 00:08:17.711 { 00:08:17.711 "dma_device_id": "system", 00:08:17.711 "dma_device_type": 1 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.711 "dma_device_type": 2 00:08:17.711 } 00:08:17.711 ], 00:08:17.711 "driver_specific": {} 00:08:17.711 } 00:08:17.711 ] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 BaseBdev3 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 [ 00:08:17.711 { 00:08:17.711 "name": "BaseBdev3", 00:08:17.711 "aliases": [ 00:08:17.711 "15981ee8-75e6-4fb9-aa55-05498b9ba450" 00:08:17.711 ], 00:08:17.711 "product_name": "Malloc disk", 00:08:17.711 "block_size": 512, 00:08:17.711 "num_blocks": 65536, 00:08:17.711 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:17.711 "assigned_rate_limits": { 00:08:17.711 "rw_ios_per_sec": 0, 00:08:17.711 "rw_mbytes_per_sec": 0, 00:08:17.711 "r_mbytes_per_sec": 0, 00:08:17.711 "w_mbytes_per_sec": 0 00:08:17.711 }, 00:08:17.711 "claimed": false, 00:08:17.711 "zoned": false, 00:08:17.711 "supported_io_types": { 00:08:17.711 "read": true, 00:08:17.711 "write": true, 00:08:17.711 "unmap": true, 00:08:17.711 "flush": true, 00:08:17.711 "reset": true, 00:08:17.711 "nvme_admin": false, 00:08:17.711 "nvme_io": false, 00:08:17.711 "nvme_io_md": false, 00:08:17.711 "write_zeroes": true, 00:08:17.711 
"zcopy": true, 00:08:17.711 "get_zone_info": false, 00:08:17.711 "zone_management": false, 00:08:17.711 "zone_append": false, 00:08:17.711 "compare": false, 00:08:17.711 "compare_and_write": false, 00:08:17.711 "abort": true, 00:08:17.711 "seek_hole": false, 00:08:17.711 "seek_data": false, 00:08:17.711 "copy": true, 00:08:17.711 "nvme_iov_md": false 00:08:17.711 }, 00:08:17.711 "memory_domains": [ 00:08:17.711 { 00:08:17.711 "dma_device_id": "system", 00:08:17.711 "dma_device_type": 1 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.711 "dma_device_type": 2 00:08:17.711 } 00:08:17.711 ], 00:08:17.711 "driver_specific": {} 00:08:17.711 } 00:08:17.711 ] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.711 17:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 [2024-10-25 17:49:36.006505] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.711 [2024-10-25 17:49:36.006590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.711 [2024-10-25 17:49:36.006632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.711 [2024-10-25 17:49:36.008359] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.711 17:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.711 "name": "Existed_Raid", 00:08:17.711 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:17.711 "strip_size_kb": 64, 00:08:17.711 "state": "configuring", 00:08:17.711 "raid_level": "raid0", 00:08:17.711 "superblock": true, 00:08:17.711 "num_base_bdevs": 3, 00:08:17.711 "num_base_bdevs_discovered": 2, 00:08:17.711 "num_base_bdevs_operational": 3, 00:08:17.711 "base_bdevs_list": [ 00:08:17.711 { 00:08:17.711 "name": "BaseBdev1", 00:08:17.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.711 "is_configured": false, 00:08:17.711 "data_offset": 0, 00:08:17.711 "data_size": 0 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "name": "BaseBdev2", 00:08:17.711 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:17.711 "is_configured": true, 00:08:17.711 "data_offset": 2048, 00:08:17.711 "data_size": 63488 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "name": "BaseBdev3", 00:08:17.711 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:17.711 "is_configured": true, 00:08:17.711 "data_offset": 2048, 00:08:17.711 "data_size": 63488 00:08:17.711 } 00:08:17.711 ] 00:08:17.711 }' 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.711 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.281 [2024-10-25 17:49:36.421794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.281 17:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.281 "name": "Existed_Raid", 00:08:18.281 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:18.281 "strip_size_kb": 64, 
00:08:18.281 "state": "configuring", 00:08:18.281 "raid_level": "raid0", 00:08:18.281 "superblock": true, 00:08:18.281 "num_base_bdevs": 3, 00:08:18.281 "num_base_bdevs_discovered": 1, 00:08:18.281 "num_base_bdevs_operational": 3, 00:08:18.281 "base_bdevs_list": [ 00:08:18.281 { 00:08:18.281 "name": "BaseBdev1", 00:08:18.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.281 "is_configured": false, 00:08:18.281 "data_offset": 0, 00:08:18.281 "data_size": 0 00:08:18.281 }, 00:08:18.281 { 00:08:18.281 "name": null, 00:08:18.281 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:18.281 "is_configured": false, 00:08:18.281 "data_offset": 0, 00:08:18.281 "data_size": 63488 00:08:18.281 }, 00:08:18.281 { 00:08:18.281 "name": "BaseBdev3", 00:08:18.281 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:18.281 "is_configured": true, 00:08:18.281 "data_offset": 2048, 00:08:18.281 "data_size": 63488 00:08:18.281 } 00:08:18.281 ] 00:08:18.281 }' 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.281 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 [2024-10-25 17:49:36.913319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.541 BaseBdev1 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 
[ 00:08:18.541 { 00:08:18.541 "name": "BaseBdev1", 00:08:18.541 "aliases": [ 00:08:18.541 "ff5a817a-59a0-4697-9bf0-cba63791dd5b" 00:08:18.541 ], 00:08:18.541 "product_name": "Malloc disk", 00:08:18.541 "block_size": 512, 00:08:18.541 "num_blocks": 65536, 00:08:18.541 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:18.541 "assigned_rate_limits": { 00:08:18.541 "rw_ios_per_sec": 0, 00:08:18.541 "rw_mbytes_per_sec": 0, 00:08:18.541 "r_mbytes_per_sec": 0, 00:08:18.541 "w_mbytes_per_sec": 0 00:08:18.541 }, 00:08:18.541 "claimed": true, 00:08:18.541 "claim_type": "exclusive_write", 00:08:18.541 "zoned": false, 00:08:18.541 "supported_io_types": { 00:08:18.541 "read": true, 00:08:18.541 "write": true, 00:08:18.541 "unmap": true, 00:08:18.541 "flush": true, 00:08:18.541 "reset": true, 00:08:18.541 "nvme_admin": false, 00:08:18.541 "nvme_io": false, 00:08:18.541 "nvme_io_md": false, 00:08:18.541 "write_zeroes": true, 00:08:18.541 "zcopy": true, 00:08:18.541 "get_zone_info": false, 00:08:18.541 "zone_management": false, 00:08:18.541 "zone_append": false, 00:08:18.541 "compare": false, 00:08:18.541 "compare_and_write": false, 00:08:18.541 "abort": true, 00:08:18.541 "seek_hole": false, 00:08:18.541 "seek_data": false, 00:08:18.541 "copy": true, 00:08:18.541 "nvme_iov_md": false 00:08:18.541 }, 00:08:18.541 "memory_domains": [ 00:08:18.541 { 00:08:18.541 "dma_device_id": "system", 00:08:18.541 "dma_device_type": 1 00:08:18.541 }, 00:08:18.541 { 00:08:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.541 "dma_device_type": 2 00:08:18.541 } 00:08:18.541 ], 00:08:18.541 "driver_specific": {} 00:08:18.541 } 00:08:18.541 ] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.541 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.799 17:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.799 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.799 "name": "Existed_Raid", 00:08:18.799 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:18.799 "strip_size_kb": 64, 00:08:18.799 "state": "configuring", 00:08:18.799 "raid_level": "raid0", 00:08:18.799 "superblock": true, 
00:08:18.799 "num_base_bdevs": 3, 00:08:18.799 "num_base_bdevs_discovered": 2, 00:08:18.799 "num_base_bdevs_operational": 3, 00:08:18.799 "base_bdevs_list": [ 00:08:18.799 { 00:08:18.799 "name": "BaseBdev1", 00:08:18.799 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:18.799 "is_configured": true, 00:08:18.799 "data_offset": 2048, 00:08:18.799 "data_size": 63488 00:08:18.799 }, 00:08:18.799 { 00:08:18.799 "name": null, 00:08:18.799 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:18.799 "is_configured": false, 00:08:18.799 "data_offset": 0, 00:08:18.799 "data_size": 63488 00:08:18.799 }, 00:08:18.799 { 00:08:18.799 "name": "BaseBdev3", 00:08:18.799 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:18.799 "is_configured": true, 00:08:18.799 "data_offset": 2048, 00:08:18.799 "data_size": 63488 00:08:18.799 } 00:08:18.799 ] 00:08:18.799 }' 00:08:18.799 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.799 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 [2024-10-25 17:49:37.432476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.057 "name": "Existed_Raid", 00:08:19.057 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:19.057 "strip_size_kb": 64, 00:08:19.057 "state": "configuring", 00:08:19.057 "raid_level": "raid0", 00:08:19.057 "superblock": true, 00:08:19.057 "num_base_bdevs": 3, 00:08:19.057 "num_base_bdevs_discovered": 1, 00:08:19.057 "num_base_bdevs_operational": 3, 00:08:19.057 "base_bdevs_list": [ 00:08:19.057 { 00:08:19.057 "name": "BaseBdev1", 00:08:19.057 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:19.057 "is_configured": true, 00:08:19.057 "data_offset": 2048, 00:08:19.057 "data_size": 63488 00:08:19.057 }, 00:08:19.057 { 00:08:19.057 "name": null, 00:08:19.057 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:19.057 "is_configured": false, 00:08:19.057 "data_offset": 0, 00:08:19.057 "data_size": 63488 00:08:19.057 }, 00:08:19.057 { 00:08:19.057 "name": null, 00:08:19.057 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:19.057 "is_configured": false, 00:08:19.057 "data_offset": 0, 00:08:19.057 "data_size": 63488 00:08:19.057 } 00:08:19.057 ] 00:08:19.057 }' 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.057 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 [2024-10-25 17:49:37.867740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.626 "name": "Existed_Raid", 00:08:19.626 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:19.626 "strip_size_kb": 64, 00:08:19.626 "state": "configuring", 00:08:19.626 "raid_level": "raid0", 00:08:19.626 "superblock": true, 00:08:19.626 "num_base_bdevs": 3, 00:08:19.626 "num_base_bdevs_discovered": 2, 00:08:19.626 "num_base_bdevs_operational": 3, 00:08:19.626 "base_bdevs_list": [ 00:08:19.626 { 00:08:19.626 "name": "BaseBdev1", 00:08:19.626 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:19.626 "is_configured": true, 00:08:19.626 "data_offset": 2048, 00:08:19.626 "data_size": 63488 00:08:19.626 }, 00:08:19.626 { 00:08:19.626 "name": null, 00:08:19.626 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:19.626 "is_configured": false, 00:08:19.626 "data_offset": 0, 00:08:19.626 "data_size": 63488 00:08:19.626 }, 00:08:19.626 { 00:08:19.626 "name": "BaseBdev3", 00:08:19.626 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:19.626 "is_configured": true, 00:08:19.626 "data_offset": 2048, 00:08:19.626 "data_size": 63488 00:08:19.626 } 00:08:19.626 ] 00:08:19.626 }' 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.626 17:49:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.195 [2024-10-25 17:49:38.378910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.195 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.195 "name": "Existed_Raid", 00:08:20.195 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:20.195 "strip_size_kb": 64, 00:08:20.195 "state": "configuring", 00:08:20.195 "raid_level": "raid0", 00:08:20.195 "superblock": true, 00:08:20.195 "num_base_bdevs": 3, 00:08:20.195 "num_base_bdevs_discovered": 1, 00:08:20.195 "num_base_bdevs_operational": 3, 00:08:20.195 "base_bdevs_list": [ 00:08:20.195 { 00:08:20.195 "name": null, 00:08:20.195 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:20.195 "is_configured": false, 00:08:20.195 "data_offset": 0, 00:08:20.195 "data_size": 63488 00:08:20.195 }, 00:08:20.195 { 00:08:20.195 "name": null, 00:08:20.195 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:20.195 "is_configured": false, 00:08:20.195 "data_offset": 0, 00:08:20.195 
"data_size": 63488 00:08:20.195 }, 00:08:20.195 { 00:08:20.195 "name": "BaseBdev3", 00:08:20.195 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:20.195 "is_configured": true, 00:08:20.195 "data_offset": 2048, 00:08:20.195 "data_size": 63488 00:08:20.196 } 00:08:20.196 ] 00:08:20.196 }' 00:08:20.196 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.196 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.455 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.455 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.455 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.455 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.715 [2024-10-25 17:49:38.932209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.715 17:49:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.715 "name": "Existed_Raid", 00:08:20.715 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:20.715 "strip_size_kb": 64, 00:08:20.715 "state": "configuring", 00:08:20.715 "raid_level": "raid0", 00:08:20.715 "superblock": true, 00:08:20.715 "num_base_bdevs": 3, 00:08:20.715 
"num_base_bdevs_discovered": 2, 00:08:20.715 "num_base_bdevs_operational": 3, 00:08:20.715 "base_bdevs_list": [ 00:08:20.715 { 00:08:20.715 "name": null, 00:08:20.715 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:20.715 "is_configured": false, 00:08:20.715 "data_offset": 0, 00:08:20.715 "data_size": 63488 00:08:20.715 }, 00:08:20.715 { 00:08:20.715 "name": "BaseBdev2", 00:08:20.715 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:20.715 "is_configured": true, 00:08:20.715 "data_offset": 2048, 00:08:20.715 "data_size": 63488 00:08:20.715 }, 00:08:20.715 { 00:08:20.715 "name": "BaseBdev3", 00:08:20.715 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:20.715 "is_configured": true, 00:08:20.715 "data_offset": 2048, 00:08:20.715 "data_size": 63488 00:08:20.715 } 00:08:20.715 ] 00:08:20.715 }' 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.715 17:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.975 17:49:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:20.975 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff5a817a-59a0-4697-9bf0-cba63791dd5b 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.235 [2024-10-25 17:49:39.458519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:21.235 [2024-10-25 17:49:39.458738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:21.235 [2024-10-25 17:49:39.458755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.235 [2024-10-25 17:49:39.459019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:21.235 NewBaseBdev 00:08:21.235 [2024-10-25 17:49:39.459172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:21.235 [2024-10-25 17:49:39.459228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:21.235 [2024-10-25 17:49:39.459386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.235 [ 00:08:21.235 { 00:08:21.235 "name": "NewBaseBdev", 00:08:21.235 "aliases": [ 00:08:21.235 "ff5a817a-59a0-4697-9bf0-cba63791dd5b" 00:08:21.235 ], 00:08:21.235 "product_name": "Malloc disk", 00:08:21.235 "block_size": 512, 00:08:21.235 "num_blocks": 65536, 00:08:21.235 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:21.235 "assigned_rate_limits": { 00:08:21.235 "rw_ios_per_sec": 0, 00:08:21.235 "rw_mbytes_per_sec": 0, 00:08:21.235 "r_mbytes_per_sec": 0, 00:08:21.235 "w_mbytes_per_sec": 0 00:08:21.235 }, 00:08:21.235 "claimed": true, 00:08:21.235 "claim_type": "exclusive_write", 00:08:21.235 "zoned": false, 00:08:21.235 "supported_io_types": { 00:08:21.235 "read": true, 00:08:21.235 "write": true, 
00:08:21.235 "unmap": true, 00:08:21.235 "flush": true, 00:08:21.235 "reset": true, 00:08:21.235 "nvme_admin": false, 00:08:21.235 "nvme_io": false, 00:08:21.235 "nvme_io_md": false, 00:08:21.235 "write_zeroes": true, 00:08:21.235 "zcopy": true, 00:08:21.235 "get_zone_info": false, 00:08:21.235 "zone_management": false, 00:08:21.235 "zone_append": false, 00:08:21.235 "compare": false, 00:08:21.235 "compare_and_write": false, 00:08:21.235 "abort": true, 00:08:21.235 "seek_hole": false, 00:08:21.235 "seek_data": false, 00:08:21.235 "copy": true, 00:08:21.235 "nvme_iov_md": false 00:08:21.235 }, 00:08:21.235 "memory_domains": [ 00:08:21.235 { 00:08:21.235 "dma_device_id": "system", 00:08:21.235 "dma_device_type": 1 00:08:21.235 }, 00:08:21.235 { 00:08:21.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.235 "dma_device_type": 2 00:08:21.235 } 00:08:21.235 ], 00:08:21.235 "driver_specific": {} 00:08:21.235 } 00:08:21.235 ] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.235 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.236 "name": "Existed_Raid", 00:08:21.236 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:21.236 "strip_size_kb": 64, 00:08:21.236 "state": "online", 00:08:21.236 "raid_level": "raid0", 00:08:21.236 "superblock": true, 00:08:21.236 "num_base_bdevs": 3, 00:08:21.236 "num_base_bdevs_discovered": 3, 00:08:21.236 "num_base_bdevs_operational": 3, 00:08:21.236 "base_bdevs_list": [ 00:08:21.236 { 00:08:21.236 "name": "NewBaseBdev", 00:08:21.236 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:21.236 "is_configured": true, 00:08:21.236 "data_offset": 2048, 00:08:21.236 "data_size": 63488 00:08:21.236 }, 00:08:21.236 { 00:08:21.236 "name": "BaseBdev2", 00:08:21.236 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:21.236 "is_configured": true, 00:08:21.236 "data_offset": 2048, 00:08:21.236 "data_size": 63488 00:08:21.236 }, 00:08:21.236 { 00:08:21.236 "name": "BaseBdev3", 00:08:21.236 "uuid": 
"15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:21.236 "is_configured": true, 00:08:21.236 "data_offset": 2048, 00:08:21.236 "data_size": 63488 00:08:21.236 } 00:08:21.236 ] 00:08:21.236 }' 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.236 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.496 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.496 [2024-10-25 17:49:39.918049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.757 17:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.757 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.757 "name": "Existed_Raid", 00:08:21.757 "aliases": [ 00:08:21.757 "2042db60-75c6-4e93-a292-260105e54ee9" 
00:08:21.757 ], 00:08:21.757 "product_name": "Raid Volume", 00:08:21.757 "block_size": 512, 00:08:21.757 "num_blocks": 190464, 00:08:21.757 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:21.757 "assigned_rate_limits": { 00:08:21.757 "rw_ios_per_sec": 0, 00:08:21.757 "rw_mbytes_per_sec": 0, 00:08:21.757 "r_mbytes_per_sec": 0, 00:08:21.757 "w_mbytes_per_sec": 0 00:08:21.757 }, 00:08:21.757 "claimed": false, 00:08:21.757 "zoned": false, 00:08:21.757 "supported_io_types": { 00:08:21.757 "read": true, 00:08:21.757 "write": true, 00:08:21.757 "unmap": true, 00:08:21.757 "flush": true, 00:08:21.757 "reset": true, 00:08:21.757 "nvme_admin": false, 00:08:21.757 "nvme_io": false, 00:08:21.757 "nvme_io_md": false, 00:08:21.757 "write_zeroes": true, 00:08:21.757 "zcopy": false, 00:08:21.757 "get_zone_info": false, 00:08:21.757 "zone_management": false, 00:08:21.757 "zone_append": false, 00:08:21.757 "compare": false, 00:08:21.757 "compare_and_write": false, 00:08:21.757 "abort": false, 00:08:21.757 "seek_hole": false, 00:08:21.757 "seek_data": false, 00:08:21.757 "copy": false, 00:08:21.757 "nvme_iov_md": false 00:08:21.757 }, 00:08:21.757 "memory_domains": [ 00:08:21.757 { 00:08:21.757 "dma_device_id": "system", 00:08:21.757 "dma_device_type": 1 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.757 "dma_device_type": 2 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "dma_device_id": "system", 00:08:21.757 "dma_device_type": 1 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.757 "dma_device_type": 2 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "dma_device_id": "system", 00:08:21.757 "dma_device_type": 1 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.757 "dma_device_type": 2 00:08:21.757 } 00:08:21.757 ], 00:08:21.757 "driver_specific": { 00:08:21.757 "raid": { 00:08:21.757 "uuid": "2042db60-75c6-4e93-a292-260105e54ee9", 00:08:21.757 
"strip_size_kb": 64, 00:08:21.757 "state": "online", 00:08:21.757 "raid_level": "raid0", 00:08:21.757 "superblock": true, 00:08:21.757 "num_base_bdevs": 3, 00:08:21.757 "num_base_bdevs_discovered": 3, 00:08:21.757 "num_base_bdevs_operational": 3, 00:08:21.757 "base_bdevs_list": [ 00:08:21.757 { 00:08:21.757 "name": "NewBaseBdev", 00:08:21.757 "uuid": "ff5a817a-59a0-4697-9bf0-cba63791dd5b", 00:08:21.757 "is_configured": true, 00:08:21.757 "data_offset": 2048, 00:08:21.757 "data_size": 63488 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "name": "BaseBdev2", 00:08:21.757 "uuid": "d3f2979a-cfd9-4003-b811-4a0c1b7eeede", 00:08:21.757 "is_configured": true, 00:08:21.757 "data_offset": 2048, 00:08:21.757 "data_size": 63488 00:08:21.757 }, 00:08:21.757 { 00:08:21.757 "name": "BaseBdev3", 00:08:21.757 "uuid": "15981ee8-75e6-4fb9-aa55-05498b9ba450", 00:08:21.757 "is_configured": true, 00:08:21.757 "data_offset": 2048, 00:08:21.757 "data_size": 63488 00:08:21.757 } 00:08:21.757 ] 00:08:21.757 } 00:08:21.757 } 00:08:21.757 }' 00:08:21.757 17:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.757 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:21.757 BaseBdev2 00:08:21.757 BaseBdev3' 00:08:21.757 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.757 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.758 17:49:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.758 [2024-10-25 17:49:40.165323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.758 [2024-10-25 17:49:40.165349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.758 [2024-10-25 17:49:40.165417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.758 [2024-10-25 17:49:40.165470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.758 [2024-10-25 17:49:40.165482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64224 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64224 ']' 00:08:21.758 17:49:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64224 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.758 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64224 00:08:22.018 killing process with pid 64224 00:08:22.018 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.018 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.018 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64224' 00:08:22.018 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64224 00:08:22.018 [2024-10-25 17:49:40.213247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.018 17:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64224 00:08:22.277 [2024-10-25 17:49:40.499151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.218 ************************************ 00:08:23.218 END TEST raid_state_function_test_sb 00:08:23.218 ************************************ 00:08:23.218 17:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:23.218 00:08:23.219 real 0m10.122s 00:08:23.219 user 0m16.067s 00:08:23.219 sys 0m1.801s 00:08:23.219 17:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.219 17:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.219 17:49:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:23.219 17:49:41 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:23.219 17:49:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.219 17:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.219 ************************************ 00:08:23.219 START TEST raid_superblock_test 00:08:23.219 ************************************ 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:23.219 17:49:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64844 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64844 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64844 ']' 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.219 17:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.479 [2024-10-25 17:49:41.718474] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
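The `waitforlisten` step above blocks until the freshly launched `bdev_svc` process is up and listening on the UNIX domain socket `/var/tmp/spdk.sock`, as the "Waiting for process to start up..." message indicates. A minimal sketch of that polling pattern in Python (the function name, timeout, and interval are illustrative; SPDK's real helper also verifies the process is alive and the socket accepts connections):

```python
import os
import time

def wait_for_rpc_socket(path, timeout=5.0, interval=0.05):
    """Poll until `path` appears on disk or `timeout` seconds elapse.

    Returns True if the path showed up in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

In the log, only after this wait succeeds do the subsequent `rpc_cmd` calls (`bdev_malloc_create`, `bdev_passthru_create`, ...) go through against the socket.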
00:08:23.479 [2024-10-25 17:49:41.718683] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64844 ] 00:08:23.479 [2024-10-25 17:49:41.893754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.739 [2024-10-25 17:49:42.003444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.999 [2024-10-25 17:49:42.196469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.999 [2024-10-25 17:49:42.196588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:24.259 
17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.259 malloc1 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.259 [2024-10-25 17:49:42.586345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.259 [2024-10-25 17:49:42.586451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.259 [2024-10-25 17:49:42.586492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:24.259 [2024-10-25 17:49:42.586522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.259 [2024-10-25 17:49:42.588594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.259 [2024-10-25 17:49:42.588666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.259 pt1 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.259 malloc2 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.259 [2024-10-25 17:49:42.641858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.259 [2024-10-25 17:49:42.641946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.259 [2024-10-25 17:49:42.641970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:24.259 [2024-10-25 17:49:42.641979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.259 [2024-10-25 17:49:42.644013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.259 [2024-10-25 17:49:42.644048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.259 
pt2 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.259 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.520 malloc3 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.520 [2024-10-25 17:49:42.704861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:24.520 [2024-10-25 17:49:42.704948] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.520 [2024-10-25 17:49:42.704983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:24.520 [2024-10-25 17:49:42.705011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.520 [2024-10-25 17:49:42.706999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.520 [2024-10-25 17:49:42.707065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:24.520 pt3 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.520 [2024-10-25 17:49:42.716898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.520 [2024-10-25 17:49:42.718655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.520 [2024-10-25 17:49:42.718750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:24.520 [2024-10-25 17:49:42.718934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:24.520 [2024-10-25 17:49:42.718979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.520 [2024-10-25 17:49:42.719232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
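The geometry in the debug output above is internally consistent: each base is a 32 MiB malloc bdev (`bdev_malloc_create 32 512`) exposing 65536 512-byte blocks, the superblock flag (`-s`) reserves a 2048-block data offset on each base, and raid0 capacity is the sum of the per-base data sizes. A quick arithmetic check of the logged values (`data_size` 63488, `blockcnt` 190464; the variable names here are ours):

```python
BLOCK_SIZE = 512
MALLOC_MB = 32
DATA_OFFSET = 2048          # blocks reserved per base for the superblock
NUM_BASE_BDEVS = 3

base_blocks = MALLOC_MB * 1024 * 1024 // BLOCK_SIZE  # 65536
data_size = base_blocks - DATA_OFFSET                # 63488, as logged
raid0_blocks = NUM_BASE_BDEVS * data_size            # 190464, as logged

assert data_size == 63488
assert raid0_blocks == 190464
```

This matches the `raid_bdev_configure_cont` line `blockcnt 190464, blocklen 512` and the `"data_offset": 2048, "data_size": 63488` fields in the JSON dumps.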
00:08:24.520 [2024-10-25 17:49:42.719421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:24.520 [2024-10-25 17:49:42.719461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:24.520 [2024-10-25 17:49:42.719624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.520 17:49:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.520 "name": "raid_bdev1", 00:08:24.520 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:24.520 "strip_size_kb": 64, 00:08:24.520 "state": "online", 00:08:24.520 "raid_level": "raid0", 00:08:24.520 "superblock": true, 00:08:24.520 "num_base_bdevs": 3, 00:08:24.520 "num_base_bdevs_discovered": 3, 00:08:24.520 "num_base_bdevs_operational": 3, 00:08:24.520 "base_bdevs_list": [ 00:08:24.520 { 00:08:24.520 "name": "pt1", 00:08:24.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.520 "is_configured": true, 00:08:24.520 "data_offset": 2048, 00:08:24.520 "data_size": 63488 00:08:24.520 }, 00:08:24.520 { 00:08:24.520 "name": "pt2", 00:08:24.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.520 "is_configured": true, 00:08:24.520 "data_offset": 2048, 00:08:24.520 "data_size": 63488 00:08:24.520 }, 00:08:24.520 { 00:08:24.520 "name": "pt3", 00:08:24.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.520 "is_configured": true, 00:08:24.520 "data_offset": 2048, 00:08:24.520 "data_size": 63488 00:08:24.520 } 00:08:24.520 ] 00:08:24.520 }' 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.520 17:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.780 [2024-10-25 17:49:43.136484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.780 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.780 "name": "raid_bdev1", 00:08:24.780 "aliases": [ 00:08:24.780 "e831e25e-1dc5-4926-a114-01be2970e591" 00:08:24.780 ], 00:08:24.780 "product_name": "Raid Volume", 00:08:24.780 "block_size": 512, 00:08:24.780 "num_blocks": 190464, 00:08:24.780 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:24.780 "assigned_rate_limits": { 00:08:24.780 "rw_ios_per_sec": 0, 00:08:24.780 "rw_mbytes_per_sec": 0, 00:08:24.780 "r_mbytes_per_sec": 0, 00:08:24.780 "w_mbytes_per_sec": 0 00:08:24.780 }, 00:08:24.780 "claimed": false, 00:08:24.780 "zoned": false, 00:08:24.780 "supported_io_types": { 00:08:24.780 "read": true, 00:08:24.780 "write": true, 00:08:24.780 "unmap": true, 00:08:24.780 "flush": true, 00:08:24.780 "reset": true, 00:08:24.780 "nvme_admin": false, 00:08:24.780 "nvme_io": false, 00:08:24.780 "nvme_io_md": false, 00:08:24.780 "write_zeroes": true, 00:08:24.780 "zcopy": false, 00:08:24.780 "get_zone_info": false, 00:08:24.780 "zone_management": false, 00:08:24.780 "zone_append": false, 00:08:24.780 "compare": 
false, 00:08:24.781 "compare_and_write": false, 00:08:24.781 "abort": false, 00:08:24.781 "seek_hole": false, 00:08:24.781 "seek_data": false, 00:08:24.781 "copy": false, 00:08:24.781 "nvme_iov_md": false 00:08:24.781 }, 00:08:24.781 "memory_domains": [ 00:08:24.781 { 00:08:24.781 "dma_device_id": "system", 00:08:24.781 "dma_device_type": 1 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.781 "dma_device_type": 2 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "dma_device_id": "system", 00:08:24.781 "dma_device_type": 1 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.781 "dma_device_type": 2 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "dma_device_id": "system", 00:08:24.781 "dma_device_type": 1 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.781 "dma_device_type": 2 00:08:24.781 } 00:08:24.781 ], 00:08:24.781 "driver_specific": { 00:08:24.781 "raid": { 00:08:24.781 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:24.781 "strip_size_kb": 64, 00:08:24.781 "state": "online", 00:08:24.781 "raid_level": "raid0", 00:08:24.781 "superblock": true, 00:08:24.781 "num_base_bdevs": 3, 00:08:24.781 "num_base_bdevs_discovered": 3, 00:08:24.781 "num_base_bdevs_operational": 3, 00:08:24.781 "base_bdevs_list": [ 00:08:24.781 { 00:08:24.781 "name": "pt1", 00:08:24.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.781 "is_configured": true, 00:08:24.781 "data_offset": 2048, 00:08:24.781 "data_size": 63488 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "name": "pt2", 00:08:24.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.781 "is_configured": true, 00:08:24.781 "data_offset": 2048, 00:08:24.781 "data_size": 63488 00:08:24.781 }, 00:08:24.781 { 00:08:24.781 "name": "pt3", 00:08:24.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.781 "is_configured": true, 00:08:24.781 "data_offset": 2048, 00:08:24.781 "data_size": 
63488 00:08:24.781 } 00:08:24.781 ] 00:08:24.781 } 00:08:24.781 } 00:08:24.781 }' 00:08:24.781 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:25.041 pt2 00:08:25.041 pt3' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:25.041 [2024-10-25 17:49:43.435823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e831e25e-1dc5-4926-a114-01be2970e591 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e831e25e-1dc5-4926-a114-01be2970e591 ']' 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.042 [2024-10-25 17:49:43.467518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.042 [2024-10-25 17:49:43.467588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.042 [2024-10-25 17:49:43.467680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.042 [2024-10-25 17:49:43.467757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.042 [2024-10-25 17:49:43.467791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:25.042 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.042 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:25.302 17:49:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.302 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.302 [2024-10-25 17:49:43.611311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:25.302 [2024-10-25 17:49:43.613108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:25.302 [2024-10-25 17:49:43.613222] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:25.302 [2024-10-25 17:49:43.613286] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:25.302 [2024-10-25 17:49:43.613330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:25.302 [2024-10-25 17:49:43.613348] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:25.302 [2024-10-25 17:49:43.613365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.302 [2024-10-25 17:49:43.613376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:25.302 request: 00:08:25.302 { 00:08:25.302 "name": "raid_bdev1", 00:08:25.302 "raid_level": "raid0", 00:08:25.302 "base_bdevs": [ 00:08:25.302 "malloc1", 00:08:25.302 "malloc2", 00:08:25.302 "malloc3" 00:08:25.302 ], 00:08:25.302 "strip_size_kb": 64, 00:08:25.302 "superblock": false, 00:08:25.302 "method": "bdev_raid_create", 00:08:25.302 "req_id": 1 00:08:25.302 } 00:08:25.302 Got JSON-RPC error response 00:08:25.302 response: 00:08:25.302 { 00:08:25.303 "code": -17, 00:08:25.303 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:25.303 } 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.303 [2024-10-25 17:49:43.675149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:25.303 [2024-10-25 17:49:43.675236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.303 [2024-10-25 17:49:43.675268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:25.303 [2024-10-25 17:49:43.675296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.303 [2024-10-25 17:49:43.677492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.303 [2024-10-25 17:49:43.677562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:25.303 [2024-10-25 17:49:43.677649] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:25.303 [2024-10-25 17:49:43.677723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:25.303 pt1 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.303 "name": "raid_bdev1", 00:08:25.303 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:25.303 
"strip_size_kb": 64, 00:08:25.303 "state": "configuring", 00:08:25.303 "raid_level": "raid0", 00:08:25.303 "superblock": true, 00:08:25.303 "num_base_bdevs": 3, 00:08:25.303 "num_base_bdevs_discovered": 1, 00:08:25.303 "num_base_bdevs_operational": 3, 00:08:25.303 "base_bdevs_list": [ 00:08:25.303 { 00:08:25.303 "name": "pt1", 00:08:25.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.303 "is_configured": true, 00:08:25.303 "data_offset": 2048, 00:08:25.303 "data_size": 63488 00:08:25.303 }, 00:08:25.303 { 00:08:25.303 "name": null, 00:08:25.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.303 "is_configured": false, 00:08:25.303 "data_offset": 2048, 00:08:25.303 "data_size": 63488 00:08:25.303 }, 00:08:25.303 { 00:08:25.303 "name": null, 00:08:25.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.303 "is_configured": false, 00:08:25.303 "data_offset": 2048, 00:08:25.303 "data_size": 63488 00:08:25.303 } 00:08:25.303 ] 00:08:25.303 }' 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.303 17:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.873 [2024-10-25 17:49:44.098536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.873 [2024-10-25 17:49:44.098630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.873 [2024-10-25 17:49:44.098667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:25.873 [2024-10-25 17:49:44.098677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.873 [2024-10-25 17:49:44.099124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.873 [2024-10-25 17:49:44.099143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.873 [2024-10-25 17:49:44.099210] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.873 [2024-10-25 17:49:44.099228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.873 pt2 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.873 [2024-10-25 17:49:44.110537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.873 17:49:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.873 "name": "raid_bdev1", 00:08:25.873 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:25.873 "strip_size_kb": 64, 00:08:25.873 "state": "configuring", 00:08:25.873 "raid_level": "raid0", 00:08:25.873 "superblock": true, 00:08:25.873 "num_base_bdevs": 3, 00:08:25.873 "num_base_bdevs_discovered": 1, 00:08:25.873 "num_base_bdevs_operational": 3, 00:08:25.873 "base_bdevs_list": [ 00:08:25.873 { 00:08:25.873 "name": "pt1", 00:08:25.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.873 "is_configured": true, 00:08:25.873 "data_offset": 2048, 00:08:25.873 "data_size": 63488 00:08:25.873 }, 00:08:25.873 { 00:08:25.873 "name": null, 00:08:25.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.873 "is_configured": false, 00:08:25.873 "data_offset": 0, 00:08:25.873 "data_size": 63488 00:08:25.873 }, 00:08:25.873 { 00:08:25.873 "name": null, 00:08:25.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.873 
"is_configured": false, 00:08:25.873 "data_offset": 2048, 00:08:25.873 "data_size": 63488 00:08:25.873 } 00:08:25.873 ] 00:08:25.873 }' 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.873 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.443 [2024-10-25 17:49:44.577732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.443 [2024-10-25 17:49:44.577848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.443 [2024-10-25 17:49:44.577883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:26.443 [2024-10-25 17:49:44.577914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.443 [2024-10-25 17:49:44.578354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.443 [2024-10-25 17:49:44.578413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.443 [2024-10-25 17:49:44.578510] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.443 [2024-10-25 17:49:44.578561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.443 pt2 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.443 [2024-10-25 17:49:44.589698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:26.443 [2024-10-25 17:49:44.589782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.443 [2024-10-25 17:49:44.589799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:26.443 [2024-10-25 17:49:44.589808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.443 [2024-10-25 17:49:44.590173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.443 [2024-10-25 17:49:44.590195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:26.443 [2024-10-25 17:49:44.590248] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:26.443 [2024-10-25 17:49:44.590267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:26.443 [2024-10-25 17:49:44.590368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:26.443 [2024-10-25 17:49:44.590379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.443 [2024-10-25 17:49:44.590607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:26.443 [2024-10-25 17:49:44.590770] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:26.443 [2024-10-25 17:49:44.590778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:26.443 [2024-10-25 17:49:44.590932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.443 pt3 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.443 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.443 "name": "raid_bdev1", 00:08:26.443 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:26.443 "strip_size_kb": 64, 00:08:26.443 "state": "online", 00:08:26.443 "raid_level": "raid0", 00:08:26.443 "superblock": true, 00:08:26.443 "num_base_bdevs": 3, 00:08:26.443 "num_base_bdevs_discovered": 3, 00:08:26.443 "num_base_bdevs_operational": 3, 00:08:26.443 "base_bdevs_list": [ 00:08:26.443 { 00:08:26.443 "name": "pt1", 00:08:26.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.443 "is_configured": true, 00:08:26.443 "data_offset": 2048, 00:08:26.443 "data_size": 63488 00:08:26.443 }, 00:08:26.443 { 00:08:26.443 "name": "pt2", 00:08:26.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.443 "is_configured": true, 00:08:26.443 "data_offset": 2048, 00:08:26.443 "data_size": 63488 00:08:26.443 }, 00:08:26.443 { 00:08:26.443 "name": "pt3", 00:08:26.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.444 "is_configured": true, 00:08:26.444 "data_offset": 2048, 00:08:26.444 "data_size": 63488 00:08:26.444 } 00:08:26.444 ] 00:08:26.444 }' 00:08:26.444 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.444 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:26.704 17:49:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 [2024-10-25 17:49:44.969322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.704 17:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.704 "name": "raid_bdev1", 00:08:26.704 "aliases": [ 00:08:26.704 "e831e25e-1dc5-4926-a114-01be2970e591" 00:08:26.704 ], 00:08:26.704 "product_name": "Raid Volume", 00:08:26.704 "block_size": 512, 00:08:26.704 "num_blocks": 190464, 00:08:26.704 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:26.704 "assigned_rate_limits": { 00:08:26.704 "rw_ios_per_sec": 0, 00:08:26.704 "rw_mbytes_per_sec": 0, 00:08:26.704 "r_mbytes_per_sec": 0, 00:08:26.704 "w_mbytes_per_sec": 0 00:08:26.704 }, 00:08:26.704 "claimed": false, 00:08:26.704 "zoned": false, 00:08:26.704 "supported_io_types": { 00:08:26.704 "read": true, 00:08:26.704 "write": true, 00:08:26.704 "unmap": true, 00:08:26.704 "flush": true, 00:08:26.704 "reset": true, 00:08:26.704 "nvme_admin": false, 00:08:26.704 "nvme_io": false, 00:08:26.704 "nvme_io_md": false, 00:08:26.704 
"write_zeroes": true, 00:08:26.704 "zcopy": false, 00:08:26.704 "get_zone_info": false, 00:08:26.704 "zone_management": false, 00:08:26.704 "zone_append": false, 00:08:26.704 "compare": false, 00:08:26.704 "compare_and_write": false, 00:08:26.704 "abort": false, 00:08:26.704 "seek_hole": false, 00:08:26.704 "seek_data": false, 00:08:26.704 "copy": false, 00:08:26.704 "nvme_iov_md": false 00:08:26.704 }, 00:08:26.704 "memory_domains": [ 00:08:26.704 { 00:08:26.704 "dma_device_id": "system", 00:08:26.704 "dma_device_type": 1 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.704 "dma_device_type": 2 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "dma_device_id": "system", 00:08:26.704 "dma_device_type": 1 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.704 "dma_device_type": 2 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "dma_device_id": "system", 00:08:26.704 "dma_device_type": 1 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.704 "dma_device_type": 2 00:08:26.704 } 00:08:26.704 ], 00:08:26.704 "driver_specific": { 00:08:26.704 "raid": { 00:08:26.704 "uuid": "e831e25e-1dc5-4926-a114-01be2970e591", 00:08:26.704 "strip_size_kb": 64, 00:08:26.704 "state": "online", 00:08:26.704 "raid_level": "raid0", 00:08:26.704 "superblock": true, 00:08:26.704 "num_base_bdevs": 3, 00:08:26.704 "num_base_bdevs_discovered": 3, 00:08:26.704 "num_base_bdevs_operational": 3, 00:08:26.704 "base_bdevs_list": [ 00:08:26.704 { 00:08:26.704 "name": "pt1", 00:08:26.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.704 "is_configured": true, 00:08:26.704 "data_offset": 2048, 00:08:26.704 "data_size": 63488 00:08:26.704 }, 00:08:26.704 { 00:08:26.704 "name": "pt2", 00:08:26.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.704 "is_configured": true, 00:08:26.704 "data_offset": 2048, 00:08:26.704 "data_size": 63488 00:08:26.704 }, 00:08:26.704 
{ 00:08:26.704 "name": "pt3", 00:08:26.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.704 "is_configured": true, 00:08:26.704 "data_offset": 2048, 00:08:26.704 "data_size": 63488 00:08:26.704 } 00:08:26.704 ] 00:08:26.704 } 00:08:26.704 } 00:08:26.704 }' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:26.704 pt2 00:08:26.704 pt3' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.704 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.964 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:26.965 17:49:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:26.965 
[2024-10-25 17:49:45.244805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e831e25e-1dc5-4926-a114-01be2970e591 '!=' e831e25e-1dc5-4926-a114-01be2970e591 ']' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64844 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64844 ']' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64844 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64844 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64844' 00:08:26.965 killing process with pid 64844 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64844 00:08:26.965 [2024-10-25 17:49:45.331247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.965 [2024-10-25 17:49:45.331374] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.965 17:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 64844 00:08:26.965 [2024-10-25 17:49:45.331456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.965 [2024-10-25 17:49:45.331477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.225 [2024-10-25 17:49:45.611931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.609 17:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:28.609 00:08:28.609 real 0m5.023s 00:08:28.609 user 0m7.215s 00:08:28.609 sys 0m0.856s 00:08:28.609 17:49:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.609 17:49:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 ************************************ 00:08:28.609 END TEST raid_superblock_test 00:08:28.609 ************************************ 00:08:28.609 17:49:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:28.609 17:49:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.609 17:49:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.609 17:49:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 ************************************ 00:08:28.609 START TEST raid_read_error_test 00:08:28.609 ************************************ 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:28.609 17:49:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HqzoKY4ZIi 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65097 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65097 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:28.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65097 ']' 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.609 17:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 [2024-10-25 17:49:46.833957] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:08:28.609 [2024-10-25 17:49:46.834148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65097 ] 00:08:28.609 [2024-10-25 17:49:47.009203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.870 [2024-10-25 17:49:47.123733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.130 [2024-10-25 17:49:47.317790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.130 [2024-10-25 17:49:47.317860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.390 BaseBdev1_malloc 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.390 true 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.390 [2024-10-25 17:49:47.707419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.390 [2024-10-25 17:49:47.707472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.390 [2024-10-25 17:49:47.707491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:29.390 [2024-10-25 17:49:47.707501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.390 [2024-10-25 17:49:47.709536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.390 [2024-10-25 17:49:47.709576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.390 BaseBdev1 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.390 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.391 BaseBdev2_malloc 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.391 true 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.391 [2024-10-25 17:49:47.769089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.391 [2024-10-25 17:49:47.769139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.391 [2024-10-25 17:49:47.769152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.391 [2024-10-25 17:49:47.769163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.391 [2024-10-25 17:49:47.771122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.391 [2024-10-25 17:49:47.771160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.391 BaseBdev2 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.391 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 BaseBdev3_malloc 00:08:29.650 17:49:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 true 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 [2024-10-25 17:49:47.848098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:29.650 [2024-10-25 17:49:47.848151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.650 [2024-10-25 17:49:47.848169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.650 [2024-10-25 17:49:47.848181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.650 [2024-10-25 17:49:47.850218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.650 [2024-10-25 17:49:47.850258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:29.650 BaseBdev3 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 [2024-10-25 17:49:47.860154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.650 [2024-10-25 17:49:47.861887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.650 [2024-10-25 17:49:47.861961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.650 [2024-10-25 17:49:47.862148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.650 [2024-10-25 17:49:47.862162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.650 [2024-10-25 17:49:47.862416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.650 [2024-10-25 17:49:47.862582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.650 [2024-10-25 17:49:47.862596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:29.650 [2024-10-25 17:49:47.862739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.650 17:49:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.650 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.651 "name": "raid_bdev1", 00:08:29.651 "uuid": "695ad05a-6422-4b62-863a-79504c58b748", 00:08:29.651 "strip_size_kb": 64, 00:08:29.651 "state": "online", 00:08:29.651 "raid_level": "raid0", 00:08:29.651 "superblock": true, 00:08:29.651 "num_base_bdevs": 3, 00:08:29.651 "num_base_bdevs_discovered": 3, 00:08:29.651 "num_base_bdevs_operational": 3, 00:08:29.651 "base_bdevs_list": [ 00:08:29.651 { 00:08:29.651 "name": "BaseBdev1", 00:08:29.651 "uuid": "742b2600-42cc-54fd-a655-e1875a5da0e7", 00:08:29.651 "is_configured": true, 00:08:29.651 "data_offset": 2048, 00:08:29.651 "data_size": 63488 00:08:29.651 }, 00:08:29.651 { 00:08:29.651 "name": "BaseBdev2", 00:08:29.651 "uuid": "5ed18223-35be-5fa9-89f6-d6bdc3f399c0", 00:08:29.651 "is_configured": true, 00:08:29.651 "data_offset": 2048, 00:08:29.651 "data_size": 63488 
00:08:29.651 }, 00:08:29.651 { 00:08:29.651 "name": "BaseBdev3", 00:08:29.651 "uuid": "9a830e54-29b7-56e1-b73e-35ad67408054", 00:08:29.651 "is_configured": true, 00:08:29.651 "data_offset": 2048, 00:08:29.651 "data_size": 63488 00:08:29.651 } 00:08:29.651 ] 00:08:29.651 }' 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.651 17:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.911 17:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:29.911 17:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.171 [2024-10-25 17:49:48.436397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:31.111 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:31.111 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.112 "name": "raid_bdev1", 00:08:31.112 "uuid": "695ad05a-6422-4b62-863a-79504c58b748", 00:08:31.112 "strip_size_kb": 64, 00:08:31.112 "state": "online", 00:08:31.112 "raid_level": "raid0", 00:08:31.112 "superblock": true, 00:08:31.112 "num_base_bdevs": 3, 00:08:31.112 "num_base_bdevs_discovered": 3, 00:08:31.112 "num_base_bdevs_operational": 3, 00:08:31.112 "base_bdevs_list": [ 00:08:31.112 { 00:08:31.112 "name": "BaseBdev1", 00:08:31.112 "uuid": "742b2600-42cc-54fd-a655-e1875a5da0e7", 00:08:31.112 "is_configured": true, 00:08:31.112 "data_offset": 2048, 00:08:31.112 "data_size": 63488 
00:08:31.112 }, 00:08:31.112 { 00:08:31.112 "name": "BaseBdev2", 00:08:31.112 "uuid": "5ed18223-35be-5fa9-89f6-d6bdc3f399c0", 00:08:31.112 "is_configured": true, 00:08:31.112 "data_offset": 2048, 00:08:31.112 "data_size": 63488 00:08:31.112 }, 00:08:31.112 { 00:08:31.112 "name": "BaseBdev3", 00:08:31.112 "uuid": "9a830e54-29b7-56e1-b73e-35ad67408054", 00:08:31.112 "is_configured": true, 00:08:31.112 "data_offset": 2048, 00:08:31.112 "data_size": 63488 00:08:31.112 } 00:08:31.112 ] 00:08:31.112 }' 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.112 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.380 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.380 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.380 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.380 [2024-10-25 17:49:49.792098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.380 [2024-10-25 17:49:49.792130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.380 [2024-10-25 17:49:49.794717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.380 [2024-10-25 17:49:49.794761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.380 [2024-10-25 17:49:49.794798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.380 [2024-10-25 17:49:49.794807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:31.380 { 00:08:31.380 "results": [ 00:08:31.380 { 00:08:31.380 "job": "raid_bdev1", 00:08:31.380 "core_mask": "0x1", 00:08:31.380 "workload": "randrw", 00:08:31.380 "percentage": 50, 
00:08:31.380 "status": "finished", 00:08:31.380 "queue_depth": 1, 00:08:31.381 "io_size": 131072, 00:08:31.381 "runtime": 1.356554, 00:08:31.381 "iops": 16648.802775267333, 00:08:31.381 "mibps": 2081.1003469084167, 00:08:31.381 "io_failed": 1, 00:08:31.381 "io_timeout": 0, 00:08:31.381 "avg_latency_us": 83.53815282257395, 00:08:31.381 "min_latency_us": 24.593886462882097, 00:08:31.381 "max_latency_us": 1359.3711790393013 00:08:31.381 } 00:08:31.381 ], 00:08:31.381 "core_count": 1 00:08:31.381 } 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65097 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65097 ']' 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65097 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.381 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65097 00:08:31.672 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.672 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.672 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65097' 00:08:31.672 killing process with pid 65097 00:08:31.672 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65097 00:08:31.672 [2024-10-25 17:49:49.843444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.672 17:49:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65097 00:08:31.672 [2024-10-25 
17:49:50.066040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HqzoKY4ZIi 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:33.062 00:08:33.062 real 0m4.466s 00:08:33.062 user 0m5.288s 00:08:33.062 sys 0m0.591s 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.062 ************************************ 00:08:33.062 END TEST raid_read_error_test 00:08:33.062 ************************************ 00:08:33.062 17:49:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.062 17:49:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:33.062 17:49:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:33.062 17:49:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.062 17:49:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.062 ************************************ 00:08:33.062 START TEST raid_write_error_test 00:08:33.062 ************************************ 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:33.062 17:49:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:33.062 17:49:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pRvLNXH8b0 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65243 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:33.062 17:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65243 00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65243 ']' 00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.063 17:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.063 [2024-10-25 17:49:51.374285] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:08:33.063 [2024-10-25 17:49:51.374506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65243 ] 00:08:33.323 [2024-10-25 17:49:51.538423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.323 [2024-10-25 17:49:51.667192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.583 [2024-10-25 17:49:51.858350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.583 [2024-10-25 17:49:51.858491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.843 BaseBdev1_malloc 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.843 true 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.843 [2024-10-25 17:49:52.251064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:33.843 [2024-10-25 17:49:52.251120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.843 [2024-10-25 17:49:52.251139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:33.843 [2024-10-25 17:49:52.251149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.843 [2024-10-25 17:49:52.253187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.843 [2024-10-25 17:49:52.253229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:33.843 BaseBdev1 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.843 17:49:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.104 BaseBdev2_malloc 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 true 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 [2024-10-25 17:49:52.313555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:34.104 [2024-10-25 17:49:52.313613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.104 [2024-10-25 17:49:52.313629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:34.104 [2024-10-25 17:49:52.313639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.104 [2024-10-25 17:49:52.315681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.104 [2024-10-25 17:49:52.315722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:34.104 BaseBdev2 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.104 17:49:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 BaseBdev3_malloc 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 true 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 [2024-10-25 17:49:52.390136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:34.104 [2024-10-25 17:49:52.390190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.104 [2024-10-25 17:49:52.390206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:34.104 [2024-10-25 17:49:52.390218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.104 [2024-10-25 17:49:52.392297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.104 [2024-10-25 17:49:52.392337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:34.104 BaseBdev3 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 [2024-10-25 17:49:52.402185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.104 [2024-10-25 17:49:52.404004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.104 [2024-10-25 17:49:52.404093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.104 [2024-10-25 17:49:52.404278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:34.104 [2024-10-25 17:49:52.404291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.104 [2024-10-25 17:49:52.404522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:34.104 [2024-10-25 17:49:52.404671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:34.104 [2024-10-25 17:49:52.404684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:34.104 [2024-10-25 17:49:52.404847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.104 "name": "raid_bdev1", 00:08:34.104 "uuid": "679a26fd-0a4f-4fe0-a0fd-473c116266e6", 00:08:34.104 "strip_size_kb": 64, 00:08:34.104 "state": "online", 00:08:34.104 "raid_level": "raid0", 00:08:34.104 "superblock": true, 00:08:34.104 "num_base_bdevs": 3, 00:08:34.104 "num_base_bdevs_discovered": 3, 00:08:34.104 "num_base_bdevs_operational": 3, 00:08:34.104 "base_bdevs_list": [ 00:08:34.104 { 00:08:34.104 "name": "BaseBdev1", 
00:08:34.104 "uuid": "18ff2675-b582-5ac1-943f-5c4669e4aff3", 00:08:34.104 "is_configured": true, 00:08:34.104 "data_offset": 2048, 00:08:34.104 "data_size": 63488 00:08:34.104 }, 00:08:34.104 { 00:08:34.104 "name": "BaseBdev2", 00:08:34.104 "uuid": "783ff6f4-6b21-514c-bb03-77c8001cfa36", 00:08:34.104 "is_configured": true, 00:08:34.104 "data_offset": 2048, 00:08:34.104 "data_size": 63488 00:08:34.104 }, 00:08:34.104 { 00:08:34.104 "name": "BaseBdev3", 00:08:34.104 "uuid": "1d670dd0-4ec7-53e7-992c-c1b35acfe01f", 00:08:34.104 "is_configured": true, 00:08:34.104 "data_offset": 2048, 00:08:34.104 "data_size": 63488 00:08:34.104 } 00:08:34.104 ] 00:08:34.104 }' 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.104 17:49:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.675 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:34.675 17:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:34.675 [2024-10-25 17:49:52.954519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.616 "name": "raid_bdev1", 00:08:35.616 "uuid": "679a26fd-0a4f-4fe0-a0fd-473c116266e6", 00:08:35.616 "strip_size_kb": 64, 00:08:35.616 "state": "online", 00:08:35.616 
"raid_level": "raid0", 00:08:35.616 "superblock": true, 00:08:35.616 "num_base_bdevs": 3, 00:08:35.616 "num_base_bdevs_discovered": 3, 00:08:35.616 "num_base_bdevs_operational": 3, 00:08:35.616 "base_bdevs_list": [ 00:08:35.616 { 00:08:35.616 "name": "BaseBdev1", 00:08:35.616 "uuid": "18ff2675-b582-5ac1-943f-5c4669e4aff3", 00:08:35.616 "is_configured": true, 00:08:35.616 "data_offset": 2048, 00:08:35.616 "data_size": 63488 00:08:35.616 }, 00:08:35.616 { 00:08:35.616 "name": "BaseBdev2", 00:08:35.616 "uuid": "783ff6f4-6b21-514c-bb03-77c8001cfa36", 00:08:35.616 "is_configured": true, 00:08:35.616 "data_offset": 2048, 00:08:35.616 "data_size": 63488 00:08:35.616 }, 00:08:35.616 { 00:08:35.616 "name": "BaseBdev3", 00:08:35.616 "uuid": "1d670dd0-4ec7-53e7-992c-c1b35acfe01f", 00:08:35.616 "is_configured": true, 00:08:35.616 "data_offset": 2048, 00:08:35.616 "data_size": 63488 00:08:35.616 } 00:08:35.616 ] 00:08:35.616 }' 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.616 17:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.875 [2024-10-25 17:49:54.292583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.875 [2024-10-25 17:49:54.292616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.875 { 00:08:35.875 "results": [ 00:08:35.875 { 00:08:35.875 "job": "raid_bdev1", 00:08:35.875 "core_mask": "0x1", 00:08:35.875 "workload": "randrw", 00:08:35.875 "percentage": 50, 00:08:35.875 "status": "finished", 00:08:35.875 "queue_depth": 1, 00:08:35.875 "io_size": 131072, 
00:08:35.875 "runtime": 1.33876, 00:08:35.875 "iops": 16882.040096806, 00:08:35.875 "mibps": 2110.25501210075, 00:08:35.875 "io_failed": 1, 00:08:35.875 "io_timeout": 0, 00:08:35.875 "avg_latency_us": 82.35818154207477, 00:08:35.875 "min_latency_us": 24.370305676855896, 00:08:35.875 "max_latency_us": 1323.598253275109 00:08:35.875 } 00:08:35.875 ], 00:08:35.875 "core_count": 1 00:08:35.875 } 00:08:35.875 [2024-10-25 17:49:54.295301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.875 [2024-10-25 17:49:54.295345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.875 [2024-10-25 17:49:54.295384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.875 [2024-10-25 17:49:54.295393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65243 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65243 ']' 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65243 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.875 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65243 00:08:36.135 killing process with pid 65243 00:08:36.135 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.135 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.135 17:49:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65243' 00:08:36.135 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65243 00:08:36.135 [2024-10-25 17:49:54.345494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.135 17:49:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65243 00:08:36.135 [2024-10-25 17:49:54.565221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pRvLNXH8b0 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:37.518 ************************************ 00:08:37.518 END TEST raid_write_error_test 00:08:37.518 ************************************ 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:37.518 00:08:37.518 real 0m4.400s 00:08:37.518 user 0m5.205s 00:08:37.518 sys 0m0.564s 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.518 17:49:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.518 17:49:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:37.518 17:49:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:37.518 17:49:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.518 17:49:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.518 17:49:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.518 ************************************ 00:08:37.518 START TEST raid_state_function_test 00:08:37.518 ************************************ 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.518 17:49:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65381 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65381' 00:08:37.518 Process raid pid: 65381 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65381 00:08:37.518 17:49:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65381 ']' 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.518 17:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.518 [2024-10-25 17:49:55.843546] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:08:37.518 [2024-10-25 17:49:55.843746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.778 [2024-10-25 17:49:56.025896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.778 [2024-10-25 17:49:56.136244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.038 [2024-10-25 17:49:56.318732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.038 [2024-10-25 17:49:56.318837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.298 [2024-10-25 17:49:56.667902] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.298 [2024-10-25 17:49:56.668007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.298 [2024-10-25 17:49:56.668021] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.298 [2024-10-25 17:49:56.668031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.298 [2024-10-25 17:49:56.668037] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.298 [2024-10-25 17:49:56.668045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.298 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.298 "name": "Existed_Raid", 00:08:38.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.298 "strip_size_kb": 64, 00:08:38.298 "state": "configuring", 00:08:38.298 "raid_level": "concat", 00:08:38.298 "superblock": false, 00:08:38.298 "num_base_bdevs": 3, 00:08:38.298 "num_base_bdevs_discovered": 0, 00:08:38.298 "num_base_bdevs_operational": 3, 00:08:38.298 "base_bdevs_list": [ 00:08:38.298 { 00:08:38.298 "name": "BaseBdev1", 00:08:38.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.298 "is_configured": false, 00:08:38.298 "data_offset": 0, 00:08:38.298 "data_size": 0 00:08:38.298 }, 00:08:38.298 { 00:08:38.298 "name": "BaseBdev2", 00:08:38.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.298 "is_configured": false, 00:08:38.299 "data_offset": 0, 00:08:38.299 "data_size": 0 00:08:38.299 }, 00:08:38.299 { 00:08:38.299 "name": "BaseBdev3", 00:08:38.299 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:38.299 "is_configured": false, 00:08:38.299 "data_offset": 0, 00:08:38.299 "data_size": 0 00:08:38.299 } 00:08:38.299 ] 00:08:38.299 }' 00:08:38.299 17:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.299 17:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [2024-10-25 17:49:57.123021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.868 [2024-10-25 17:49:57.123093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [2024-10-25 17:49:57.131018] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.868 [2024-10-25 17:49:57.131093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.868 [2024-10-25 17:49:57.131119] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.868 [2024-10-25 17:49:57.131140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:38.868 [2024-10-25 17:49:57.131158] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.868 [2024-10-25 17:49:57.131177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [2024-10-25 17:49:57.172740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.868 BaseBdev1 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [ 00:08:38.868 { 00:08:38.868 "name": "BaseBdev1", 00:08:38.868 "aliases": [ 00:08:38.868 "4828878b-af48-44ad-9d4b-5fd8ef652c81" 00:08:38.868 ], 00:08:38.868 "product_name": "Malloc disk", 00:08:38.868 "block_size": 512, 00:08:38.868 "num_blocks": 65536, 00:08:38.868 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:38.868 "assigned_rate_limits": { 00:08:38.868 "rw_ios_per_sec": 0, 00:08:38.868 "rw_mbytes_per_sec": 0, 00:08:38.868 "r_mbytes_per_sec": 0, 00:08:38.868 "w_mbytes_per_sec": 0 00:08:38.868 }, 00:08:38.868 "claimed": true, 00:08:38.868 "claim_type": "exclusive_write", 00:08:38.868 "zoned": false, 00:08:38.868 "supported_io_types": { 00:08:38.868 "read": true, 00:08:38.868 "write": true, 00:08:38.868 "unmap": true, 00:08:38.868 "flush": true, 00:08:38.868 "reset": true, 00:08:38.868 "nvme_admin": false, 00:08:38.868 "nvme_io": false, 00:08:38.868 "nvme_io_md": false, 00:08:38.868 "write_zeroes": true, 00:08:38.868 "zcopy": true, 00:08:38.868 "get_zone_info": false, 00:08:38.868 "zone_management": false, 00:08:38.868 "zone_append": false, 00:08:38.868 "compare": false, 00:08:38.868 "compare_and_write": false, 00:08:38.868 "abort": true, 00:08:38.868 "seek_hole": false, 00:08:38.868 "seek_data": false, 00:08:38.868 "copy": true, 00:08:38.868 "nvme_iov_md": false 00:08:38.868 }, 00:08:38.868 "memory_domains": [ 00:08:38.868 { 00:08:38.868 "dma_device_id": "system", 00:08:38.868 "dma_device_type": 1 00:08:38.868 }, 00:08:38.868 { 00:08:38.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:38.868 "dma_device_type": 2 00:08:38.868 } 00:08:38.868 ], 00:08:38.868 "driver_specific": {} 00:08:38.868 } 00:08:38.868 ] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 17:49:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.868 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.868 "name": "Existed_Raid", 00:08:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.868 "strip_size_kb": 64, 00:08:38.868 "state": "configuring", 00:08:38.868 "raid_level": "concat", 00:08:38.868 "superblock": false, 00:08:38.868 "num_base_bdevs": 3, 00:08:38.868 "num_base_bdevs_discovered": 1, 00:08:38.868 "num_base_bdevs_operational": 3, 00:08:38.868 "base_bdevs_list": [ 00:08:38.868 { 00:08:38.868 "name": "BaseBdev1", 00:08:38.868 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:38.868 "is_configured": true, 00:08:38.868 "data_offset": 0, 00:08:38.868 "data_size": 65536 00:08:38.868 }, 00:08:38.868 { 00:08:38.868 "name": "BaseBdev2", 00:08:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.868 "is_configured": false, 00:08:38.868 "data_offset": 0, 00:08:38.868 "data_size": 0 00:08:38.868 }, 00:08:38.868 { 00:08:38.868 "name": "BaseBdev3", 00:08:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.869 "is_configured": false, 00:08:38.869 "data_offset": 0, 00:08:38.869 "data_size": 0 00:08:38.869 } 00:08:38.869 ] 00:08:38.869 }' 00:08:38.869 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.869 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.437 [2024-10-25 17:49:57.639975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.437 [2024-10-25 17:49:57.640025] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.437 [2024-10-25 17:49:57.651993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.437 [2024-10-25 17:49:57.653790] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.437 [2024-10-25 17:49:57.653848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.437 [2024-10-25 17:49:57.653859] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.437 [2024-10-25 17:49:57.653869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.437 17:49:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.437 "name": "Existed_Raid", 00:08:39.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.437 "strip_size_kb": 64, 00:08:39.437 "state": "configuring", 00:08:39.437 "raid_level": "concat", 00:08:39.437 "superblock": false, 00:08:39.437 "num_base_bdevs": 3, 00:08:39.437 "num_base_bdevs_discovered": 1, 00:08:39.437 "num_base_bdevs_operational": 3, 00:08:39.437 "base_bdevs_list": [ 00:08:39.437 { 00:08:39.437 "name": "BaseBdev1", 00:08:39.437 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:39.437 "is_configured": true, 00:08:39.437 "data_offset": 
0, 00:08:39.437 "data_size": 65536 00:08:39.437 }, 00:08:39.437 { 00:08:39.437 "name": "BaseBdev2", 00:08:39.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.437 "is_configured": false, 00:08:39.437 "data_offset": 0, 00:08:39.437 "data_size": 0 00:08:39.437 }, 00:08:39.437 { 00:08:39.437 "name": "BaseBdev3", 00:08:39.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.437 "is_configured": false, 00:08:39.437 "data_offset": 0, 00:08:39.437 "data_size": 0 00:08:39.437 } 00:08:39.437 ] 00:08:39.437 }' 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.437 17:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.697 [2024-10-25 17:49:58.089316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.697 BaseBdev2 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.697 [ 00:08:39.697 { 00:08:39.697 "name": "BaseBdev2", 00:08:39.697 "aliases": [ 00:08:39.697 "3b294c11-10f5-4c84-bda6-73dfa412a5e2" 00:08:39.697 ], 00:08:39.697 "product_name": "Malloc disk", 00:08:39.697 "block_size": 512, 00:08:39.697 "num_blocks": 65536, 00:08:39.697 "uuid": "3b294c11-10f5-4c84-bda6-73dfa412a5e2", 00:08:39.697 "assigned_rate_limits": { 00:08:39.697 "rw_ios_per_sec": 0, 00:08:39.697 "rw_mbytes_per_sec": 0, 00:08:39.697 "r_mbytes_per_sec": 0, 00:08:39.697 "w_mbytes_per_sec": 0 00:08:39.697 }, 00:08:39.697 "claimed": true, 00:08:39.697 "claim_type": "exclusive_write", 00:08:39.697 "zoned": false, 00:08:39.697 "supported_io_types": { 00:08:39.697 "read": true, 00:08:39.697 "write": true, 00:08:39.697 "unmap": true, 00:08:39.697 "flush": true, 00:08:39.697 "reset": true, 00:08:39.697 "nvme_admin": false, 00:08:39.697 "nvme_io": false, 00:08:39.697 "nvme_io_md": false, 00:08:39.697 "write_zeroes": true, 00:08:39.697 "zcopy": true, 00:08:39.697 "get_zone_info": false, 00:08:39.697 "zone_management": false, 00:08:39.697 "zone_append": false, 00:08:39.697 "compare": false, 00:08:39.697 "compare_and_write": false, 00:08:39.697 "abort": true, 00:08:39.697 "seek_hole": 
false, 00:08:39.697 "seek_data": false, 00:08:39.697 "copy": true, 00:08:39.697 "nvme_iov_md": false 00:08:39.697 }, 00:08:39.697 "memory_domains": [ 00:08:39.697 { 00:08:39.697 "dma_device_id": "system", 00:08:39.697 "dma_device_type": 1 00:08:39.697 }, 00:08:39.697 { 00:08:39.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.697 "dma_device_type": 2 00:08:39.697 } 00:08:39.697 ], 00:08:39.697 "driver_specific": {} 00:08:39.697 } 00:08:39.697 ] 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.697 17:49:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.958 "name": "Existed_Raid", 00:08:39.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.958 "strip_size_kb": 64, 00:08:39.958 "state": "configuring", 00:08:39.958 "raid_level": "concat", 00:08:39.958 "superblock": false, 00:08:39.958 "num_base_bdevs": 3, 00:08:39.958 "num_base_bdevs_discovered": 2, 00:08:39.958 "num_base_bdevs_operational": 3, 00:08:39.958 "base_bdevs_list": [ 00:08:39.958 { 00:08:39.958 "name": "BaseBdev1", 00:08:39.958 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:39.958 "is_configured": true, 00:08:39.958 "data_offset": 0, 00:08:39.958 "data_size": 65536 00:08:39.958 }, 00:08:39.958 { 00:08:39.958 "name": "BaseBdev2", 00:08:39.958 "uuid": "3b294c11-10f5-4c84-bda6-73dfa412a5e2", 00:08:39.958 "is_configured": true, 00:08:39.958 "data_offset": 0, 00:08:39.958 "data_size": 65536 00:08:39.958 }, 00:08:39.958 { 00:08:39.958 "name": "BaseBdev3", 00:08:39.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.958 "is_configured": false, 00:08:39.958 "data_offset": 0, 00:08:39.958 "data_size": 0 00:08:39.958 } 00:08:39.958 ] 00:08:39.958 }' 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.958 17:49:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 [2024-10-25 17:49:58.650087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.220 [2024-10-25 17:49:58.650137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.220 [2024-10-25 17:49:58.650150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:40.220 [2024-10-25 17:49:58.650441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:40.220 [2024-10-25 17:49:58.650610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.220 [2024-10-25 17:49:58.650620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:40.220 [2024-10-25 17:49:58.650908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.220 BaseBdev3 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.220 17:49:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.481 [ 00:08:40.481 { 00:08:40.481 "name": "BaseBdev3", 00:08:40.481 "aliases": [ 00:08:40.481 "fa8f870f-a97a-48fc-8589-868ca8c8929a" 00:08:40.481 ], 00:08:40.481 "product_name": "Malloc disk", 00:08:40.481 "block_size": 512, 00:08:40.481 "num_blocks": 65536, 00:08:40.481 "uuid": "fa8f870f-a97a-48fc-8589-868ca8c8929a", 00:08:40.481 "assigned_rate_limits": { 00:08:40.481 "rw_ios_per_sec": 0, 00:08:40.481 "rw_mbytes_per_sec": 0, 00:08:40.481 "r_mbytes_per_sec": 0, 00:08:40.481 "w_mbytes_per_sec": 0 00:08:40.481 }, 00:08:40.481 "claimed": true, 00:08:40.481 "claim_type": "exclusive_write", 00:08:40.481 "zoned": false, 00:08:40.481 "supported_io_types": { 00:08:40.481 "read": true, 00:08:40.481 "write": true, 00:08:40.481 "unmap": true, 00:08:40.481 "flush": true, 00:08:40.481 "reset": true, 00:08:40.481 "nvme_admin": false, 00:08:40.481 "nvme_io": false, 00:08:40.481 "nvme_io_md": false, 00:08:40.481 "write_zeroes": true, 00:08:40.481 "zcopy": true, 00:08:40.481 "get_zone_info": false, 00:08:40.481 "zone_management": false, 00:08:40.481 "zone_append": false, 00:08:40.481 "compare": false, 
00:08:40.481 "compare_and_write": false, 00:08:40.481 "abort": true, 00:08:40.481 "seek_hole": false, 00:08:40.481 "seek_data": false, 00:08:40.481 "copy": true, 00:08:40.481 "nvme_iov_md": false 00:08:40.481 }, 00:08:40.481 "memory_domains": [ 00:08:40.481 { 00:08:40.481 "dma_device_id": "system", 00:08:40.481 "dma_device_type": 1 00:08:40.481 }, 00:08:40.481 { 00:08:40.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.481 "dma_device_type": 2 00:08:40.481 } 00:08:40.481 ], 00:08:40.481 "driver_specific": {} 00:08:40.481 } 00:08:40.481 ] 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.481 "name": "Existed_Raid", 00:08:40.481 "uuid": "53d1be66-cae6-45ae-957b-32c53d9ea68a", 00:08:40.481 "strip_size_kb": 64, 00:08:40.481 "state": "online", 00:08:40.481 "raid_level": "concat", 00:08:40.481 "superblock": false, 00:08:40.481 "num_base_bdevs": 3, 00:08:40.481 "num_base_bdevs_discovered": 3, 00:08:40.481 "num_base_bdevs_operational": 3, 00:08:40.481 "base_bdevs_list": [ 00:08:40.481 { 00:08:40.481 "name": "BaseBdev1", 00:08:40.481 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:40.481 "is_configured": true, 00:08:40.481 "data_offset": 0, 00:08:40.481 "data_size": 65536 00:08:40.481 }, 00:08:40.481 { 00:08:40.481 "name": "BaseBdev2", 00:08:40.481 "uuid": "3b294c11-10f5-4c84-bda6-73dfa412a5e2", 00:08:40.481 "is_configured": true, 00:08:40.481 "data_offset": 0, 00:08:40.481 "data_size": 65536 00:08:40.481 }, 00:08:40.481 { 00:08:40.481 "name": "BaseBdev3", 00:08:40.481 "uuid": "fa8f870f-a97a-48fc-8589-868ca8c8929a", 00:08:40.481 "is_configured": true, 00:08:40.481 "data_offset": 0, 00:08:40.481 "data_size": 65536 00:08:40.481 } 00:08:40.481 ] 00:08:40.481 }' 00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:40.481 17:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.741 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.741 [2024-10-25 17:49:59.169524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.999 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.999 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.999 "name": "Existed_Raid", 00:08:40.999 "aliases": [ 00:08:40.999 "53d1be66-cae6-45ae-957b-32c53d9ea68a" 00:08:40.999 ], 00:08:40.999 "product_name": "Raid Volume", 00:08:40.999 "block_size": 512, 00:08:41.000 "num_blocks": 196608, 00:08:41.000 "uuid": "53d1be66-cae6-45ae-957b-32c53d9ea68a", 00:08:41.000 "assigned_rate_limits": { 00:08:41.000 "rw_ios_per_sec": 0, 00:08:41.000 "rw_mbytes_per_sec": 0, 00:08:41.000 "r_mbytes_per_sec": 
0, 00:08:41.000 "w_mbytes_per_sec": 0 00:08:41.000 }, 00:08:41.000 "claimed": false, 00:08:41.000 "zoned": false, 00:08:41.000 "supported_io_types": { 00:08:41.000 "read": true, 00:08:41.000 "write": true, 00:08:41.000 "unmap": true, 00:08:41.000 "flush": true, 00:08:41.000 "reset": true, 00:08:41.000 "nvme_admin": false, 00:08:41.000 "nvme_io": false, 00:08:41.000 "nvme_io_md": false, 00:08:41.000 "write_zeroes": true, 00:08:41.000 "zcopy": false, 00:08:41.000 "get_zone_info": false, 00:08:41.000 "zone_management": false, 00:08:41.000 "zone_append": false, 00:08:41.000 "compare": false, 00:08:41.000 "compare_and_write": false, 00:08:41.000 "abort": false, 00:08:41.000 "seek_hole": false, 00:08:41.000 "seek_data": false, 00:08:41.000 "copy": false, 00:08:41.000 "nvme_iov_md": false 00:08:41.000 }, 00:08:41.000 "memory_domains": [ 00:08:41.000 { 00:08:41.000 "dma_device_id": "system", 00:08:41.000 "dma_device_type": 1 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.000 "dma_device_type": 2 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "dma_device_id": "system", 00:08:41.000 "dma_device_type": 1 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.000 "dma_device_type": 2 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "dma_device_id": "system", 00:08:41.000 "dma_device_type": 1 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.000 "dma_device_type": 2 00:08:41.000 } 00:08:41.000 ], 00:08:41.000 "driver_specific": { 00:08:41.000 "raid": { 00:08:41.000 "uuid": "53d1be66-cae6-45ae-957b-32c53d9ea68a", 00:08:41.000 "strip_size_kb": 64, 00:08:41.000 "state": "online", 00:08:41.000 "raid_level": "concat", 00:08:41.000 "superblock": false, 00:08:41.000 "num_base_bdevs": 3, 00:08:41.000 "num_base_bdevs_discovered": 3, 00:08:41.000 "num_base_bdevs_operational": 3, 00:08:41.000 "base_bdevs_list": [ 00:08:41.000 { 00:08:41.000 "name": "BaseBdev1", 
00:08:41.000 "uuid": "4828878b-af48-44ad-9d4b-5fd8ef652c81", 00:08:41.000 "is_configured": true, 00:08:41.000 "data_offset": 0, 00:08:41.000 "data_size": 65536 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "name": "BaseBdev2", 00:08:41.000 "uuid": "3b294c11-10f5-4c84-bda6-73dfa412a5e2", 00:08:41.000 "is_configured": true, 00:08:41.000 "data_offset": 0, 00:08:41.000 "data_size": 65536 00:08:41.000 }, 00:08:41.000 { 00:08:41.000 "name": "BaseBdev3", 00:08:41.000 "uuid": "fa8f870f-a97a-48fc-8589-868ca8c8929a", 00:08:41.000 "is_configured": true, 00:08:41.000 "data_offset": 0, 00:08:41.000 "data_size": 65536 00:08:41.000 } 00:08:41.000 ] 00:08:41.000 } 00:08:41.000 } 00:08:41.000 }' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.000 BaseBdev2 00:08:41.000 BaseBdev3' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.000 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.258 [2024-10-25 17:49:59.436895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.258 [2024-10-25 17:49:59.436964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.258 [2024-10-25 17:49:59.437037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.258 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.259 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.259 "name": "Existed_Raid", 00:08:41.259 "uuid": "53d1be66-cae6-45ae-957b-32c53d9ea68a", 00:08:41.259 "strip_size_kb": 64, 00:08:41.259 "state": "offline", 00:08:41.259 "raid_level": "concat", 00:08:41.259 "superblock": false, 00:08:41.259 "num_base_bdevs": 3, 00:08:41.259 "num_base_bdevs_discovered": 2, 00:08:41.259 "num_base_bdevs_operational": 2, 00:08:41.259 "base_bdevs_list": [ 00:08:41.259 { 00:08:41.259 "name": null, 00:08:41.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.259 "is_configured": false, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 65536 00:08:41.259 }, 00:08:41.259 { 00:08:41.259 "name": "BaseBdev2", 00:08:41.259 "uuid": 
"3b294c11-10f5-4c84-bda6-73dfa412a5e2", 00:08:41.259 "is_configured": true, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 65536 00:08:41.259 }, 00:08:41.259 { 00:08:41.259 "name": "BaseBdev3", 00:08:41.259 "uuid": "fa8f870f-a97a-48fc-8589-868ca8c8929a", 00:08:41.259 "is_configured": true, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 65536 00:08:41.259 } 00:08:41.259 ] 00:08:41.259 }' 00:08:41.259 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.259 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.827 17:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.827 [2024-10-25 17:50:00.028245] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.827 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.827 [2024-10-25 17:50:00.173222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.827 [2024-10-25 17:50:00.173272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.088 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.088 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.089 17:50:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 BaseBdev2 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.089 
17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 [ 00:08:42.089 { 00:08:42.089 "name": "BaseBdev2", 00:08:42.089 "aliases": [ 00:08:42.089 "f3846460-e62e-4f6d-9cb4-17a4fe3eddad" 00:08:42.089 ], 00:08:42.089 "product_name": "Malloc disk", 00:08:42.089 "block_size": 512, 00:08:42.089 "num_blocks": 65536, 00:08:42.089 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:42.089 "assigned_rate_limits": { 00:08:42.089 "rw_ios_per_sec": 0, 00:08:42.089 "rw_mbytes_per_sec": 0, 00:08:42.089 "r_mbytes_per_sec": 0, 00:08:42.089 "w_mbytes_per_sec": 0 00:08:42.089 }, 00:08:42.089 "claimed": false, 00:08:42.089 "zoned": false, 00:08:42.089 "supported_io_types": { 00:08:42.089 "read": true, 00:08:42.089 "write": true, 00:08:42.089 "unmap": true, 00:08:42.089 "flush": true, 00:08:42.089 "reset": true, 00:08:42.089 "nvme_admin": false, 00:08:42.089 "nvme_io": false, 00:08:42.089 "nvme_io_md": false, 00:08:42.089 "write_zeroes": true, 
00:08:42.089 "zcopy": true, 00:08:42.089 "get_zone_info": false, 00:08:42.089 "zone_management": false, 00:08:42.089 "zone_append": false, 00:08:42.089 "compare": false, 00:08:42.089 "compare_and_write": false, 00:08:42.089 "abort": true, 00:08:42.089 "seek_hole": false, 00:08:42.089 "seek_data": false, 00:08:42.089 "copy": true, 00:08:42.089 "nvme_iov_md": false 00:08:42.089 }, 00:08:42.089 "memory_domains": [ 00:08:42.089 { 00:08:42.089 "dma_device_id": "system", 00:08:42.089 "dma_device_type": 1 00:08:42.089 }, 00:08:42.089 { 00:08:42.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.089 "dma_device_type": 2 00:08:42.089 } 00:08:42.089 ], 00:08:42.089 "driver_specific": {} 00:08:42.089 } 00:08:42.089 ] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 BaseBdev3 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.089 17:50:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 [ 00:08:42.089 { 00:08:42.089 "name": "BaseBdev3", 00:08:42.089 "aliases": [ 00:08:42.089 "e9e058a8-476f-434f-926c-d2de138e20c6" 00:08:42.089 ], 00:08:42.089 "product_name": "Malloc disk", 00:08:42.089 "block_size": 512, 00:08:42.089 "num_blocks": 65536, 00:08:42.089 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:42.089 "assigned_rate_limits": { 00:08:42.089 "rw_ios_per_sec": 0, 00:08:42.089 "rw_mbytes_per_sec": 0, 00:08:42.089 "r_mbytes_per_sec": 0, 00:08:42.089 "w_mbytes_per_sec": 0 00:08:42.089 }, 00:08:42.089 "claimed": false, 00:08:42.089 "zoned": false, 00:08:42.089 "supported_io_types": { 00:08:42.089 "read": true, 00:08:42.089 "write": true, 00:08:42.089 "unmap": true, 00:08:42.089 "flush": true, 00:08:42.089 "reset": true, 00:08:42.089 "nvme_admin": false, 00:08:42.089 "nvme_io": false, 00:08:42.089 "nvme_io_md": false, 00:08:42.089 "write_zeroes": true, 
00:08:42.089 "zcopy": true, 00:08:42.089 "get_zone_info": false, 00:08:42.089 "zone_management": false, 00:08:42.089 "zone_append": false, 00:08:42.089 "compare": false, 00:08:42.089 "compare_and_write": false, 00:08:42.089 "abort": true, 00:08:42.089 "seek_hole": false, 00:08:42.089 "seek_data": false, 00:08:42.089 "copy": true, 00:08:42.089 "nvme_iov_md": false 00:08:42.089 }, 00:08:42.089 "memory_domains": [ 00:08:42.089 { 00:08:42.089 "dma_device_id": "system", 00:08:42.089 "dma_device_type": 1 00:08:42.089 }, 00:08:42.089 { 00:08:42.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.089 "dma_device_type": 2 00:08:42.089 } 00:08:42.089 ], 00:08:42.089 "driver_specific": {} 00:08:42.089 } 00:08:42.089 ] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.089 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.089 [2024-10-25 17:50:00.470980] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.089 [2024-10-25 17:50:00.471063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.089 [2024-10-25 17:50:00.471103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.089 [2024-10-25 17:50:00.472933] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.090 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.356 17:50:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.356 "name": "Existed_Raid", 00:08:42.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.357 "strip_size_kb": 64, 00:08:42.357 "state": "configuring", 00:08:42.357 "raid_level": "concat", 00:08:42.357 "superblock": false, 00:08:42.357 "num_base_bdevs": 3, 00:08:42.357 "num_base_bdevs_discovered": 2, 00:08:42.357 "num_base_bdevs_operational": 3, 00:08:42.357 "base_bdevs_list": [ 00:08:42.357 { 00:08:42.357 "name": "BaseBdev1", 00:08:42.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.357 "is_configured": false, 00:08:42.357 "data_offset": 0, 00:08:42.357 "data_size": 0 00:08:42.357 }, 00:08:42.357 { 00:08:42.357 "name": "BaseBdev2", 00:08:42.357 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:42.357 "is_configured": true, 00:08:42.357 "data_offset": 0, 00:08:42.357 "data_size": 65536 00:08:42.357 }, 00:08:42.357 { 00:08:42.357 "name": "BaseBdev3", 00:08:42.357 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:42.357 "is_configured": true, 00:08:42.357 "data_offset": 0, 00:08:42.357 "data_size": 65536 00:08:42.357 } 00:08:42.357 ] 00:08:42.357 }' 00:08:42.357 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.357 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.617 [2024-10-25 17:50:00.950150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.617 17:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.617 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.617 "name": "Existed_Raid", 00:08:42.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.617 "strip_size_kb": 64, 00:08:42.617 "state": "configuring", 00:08:42.617 "raid_level": "concat", 00:08:42.617 "superblock": false, 
00:08:42.617 "num_base_bdevs": 3, 00:08:42.617 "num_base_bdevs_discovered": 1, 00:08:42.617 "num_base_bdevs_operational": 3, 00:08:42.617 "base_bdevs_list": [ 00:08:42.617 { 00:08:42.617 "name": "BaseBdev1", 00:08:42.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.617 "is_configured": false, 00:08:42.617 "data_offset": 0, 00:08:42.617 "data_size": 0 00:08:42.617 }, 00:08:42.617 { 00:08:42.617 "name": null, 00:08:42.617 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:42.617 "is_configured": false, 00:08:42.617 "data_offset": 0, 00:08:42.617 "data_size": 65536 00:08:42.617 }, 00:08:42.617 { 00:08:42.617 "name": "BaseBdev3", 00:08:42.617 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:42.617 "is_configured": true, 00:08:42.617 "data_offset": 0, 00:08:42.617 "data_size": 65536 00:08:42.617 } 00:08:42.617 ] 00:08:42.617 }' 00:08:42.617 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.617 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.187 
17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 [2024-10-25 17:50:01.466137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.187 BaseBdev1 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.187 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 [ 00:08:43.187 { 00:08:43.187 "name": "BaseBdev1", 00:08:43.187 "aliases": [ 00:08:43.187 "fb485b7f-6564-4e07-ac18-fc43fb237096" 00:08:43.187 ], 00:08:43.187 "product_name": 
"Malloc disk", 00:08:43.187 "block_size": 512, 00:08:43.187 "num_blocks": 65536, 00:08:43.187 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:43.188 "assigned_rate_limits": { 00:08:43.188 "rw_ios_per_sec": 0, 00:08:43.188 "rw_mbytes_per_sec": 0, 00:08:43.188 "r_mbytes_per_sec": 0, 00:08:43.188 "w_mbytes_per_sec": 0 00:08:43.188 }, 00:08:43.188 "claimed": true, 00:08:43.188 "claim_type": "exclusive_write", 00:08:43.188 "zoned": false, 00:08:43.188 "supported_io_types": { 00:08:43.188 "read": true, 00:08:43.188 "write": true, 00:08:43.188 "unmap": true, 00:08:43.188 "flush": true, 00:08:43.188 "reset": true, 00:08:43.188 "nvme_admin": false, 00:08:43.188 "nvme_io": false, 00:08:43.188 "nvme_io_md": false, 00:08:43.188 "write_zeroes": true, 00:08:43.188 "zcopy": true, 00:08:43.188 "get_zone_info": false, 00:08:43.188 "zone_management": false, 00:08:43.188 "zone_append": false, 00:08:43.188 "compare": false, 00:08:43.188 "compare_and_write": false, 00:08:43.188 "abort": true, 00:08:43.188 "seek_hole": false, 00:08:43.188 "seek_data": false, 00:08:43.188 "copy": true, 00:08:43.188 "nvme_iov_md": false 00:08:43.188 }, 00:08:43.188 "memory_domains": [ 00:08:43.188 { 00:08:43.188 "dma_device_id": "system", 00:08:43.188 "dma_device_type": 1 00:08:43.188 }, 00:08:43.188 { 00:08:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.188 "dma_device_type": 2 00:08:43.188 } 00:08:43.188 ], 00:08:43.188 "driver_specific": {} 00:08:43.188 } 00:08:43.188 ] 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.188 17:50:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.188 "name": "Existed_Raid", 00:08:43.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.188 "strip_size_kb": 64, 00:08:43.188 "state": "configuring", 00:08:43.188 "raid_level": "concat", 00:08:43.188 "superblock": false, 00:08:43.188 "num_base_bdevs": 3, 00:08:43.188 "num_base_bdevs_discovered": 2, 00:08:43.188 "num_base_bdevs_operational": 3, 00:08:43.188 "base_bdevs_list": [ 00:08:43.188 { 00:08:43.188 "name": "BaseBdev1", 
00:08:43.188 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:43.188 "is_configured": true, 00:08:43.188 "data_offset": 0, 00:08:43.188 "data_size": 65536 00:08:43.188 }, 00:08:43.188 { 00:08:43.188 "name": null, 00:08:43.188 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:43.188 "is_configured": false, 00:08:43.188 "data_offset": 0, 00:08:43.188 "data_size": 65536 00:08:43.188 }, 00:08:43.188 { 00:08:43.188 "name": "BaseBdev3", 00:08:43.188 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:43.188 "is_configured": true, 00:08:43.188 "data_offset": 0, 00:08:43.188 "data_size": 65536 00:08:43.188 } 00:08:43.188 ] 00:08:43.188 }' 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.188 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.756 [2024-10-25 17:50:01.957351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:43.756 
17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.756 "name": "Existed_Raid", 00:08:43.756 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:43.756 "strip_size_kb": 64, 00:08:43.756 "state": "configuring", 00:08:43.756 "raid_level": "concat", 00:08:43.756 "superblock": false, 00:08:43.756 "num_base_bdevs": 3, 00:08:43.756 "num_base_bdevs_discovered": 1, 00:08:43.756 "num_base_bdevs_operational": 3, 00:08:43.756 "base_bdevs_list": [ 00:08:43.756 { 00:08:43.756 "name": "BaseBdev1", 00:08:43.756 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:43.756 "is_configured": true, 00:08:43.756 "data_offset": 0, 00:08:43.756 "data_size": 65536 00:08:43.756 }, 00:08:43.756 { 00:08:43.756 "name": null, 00:08:43.756 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:43.756 "is_configured": false, 00:08:43.756 "data_offset": 0, 00:08:43.756 "data_size": 65536 00:08:43.756 }, 00:08:43.756 { 00:08:43.756 "name": null, 00:08:43.756 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:43.756 "is_configured": false, 00:08:43.756 "data_offset": 0, 00:08:43.756 "data_size": 65536 00:08:43.756 } 00:08:43.756 ] 00:08:43.756 }' 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.756 17:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.014 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.014 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.014 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.014 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.014 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.274 [2024-10-25 17:50:02.476508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.274 "name": "Existed_Raid", 00:08:44.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.274 "strip_size_kb": 64, 00:08:44.274 "state": "configuring", 00:08:44.274 "raid_level": "concat", 00:08:44.274 "superblock": false, 00:08:44.274 "num_base_bdevs": 3, 00:08:44.274 "num_base_bdevs_discovered": 2, 00:08:44.274 "num_base_bdevs_operational": 3, 00:08:44.274 "base_bdevs_list": [ 00:08:44.274 { 00:08:44.274 "name": "BaseBdev1", 00:08:44.274 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:44.274 "is_configured": true, 00:08:44.274 "data_offset": 0, 00:08:44.274 "data_size": 65536 00:08:44.274 }, 00:08:44.274 { 00:08:44.274 "name": null, 00:08:44.274 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:44.274 "is_configured": false, 00:08:44.274 "data_offset": 0, 00:08:44.274 "data_size": 65536 00:08:44.274 }, 00:08:44.274 { 00:08:44.274 "name": "BaseBdev3", 00:08:44.274 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:44.274 "is_configured": true, 00:08:44.274 "data_offset": 0, 00:08:44.274 "data_size": 65536 00:08:44.274 } 00:08:44.274 ] 00:08:44.274 }' 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.274 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 17:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 [2024-10-25 17:50:02.955812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.794 17:50:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.794 "name": "Existed_Raid", 00:08:44.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.794 "strip_size_kb": 64, 00:08:44.794 "state": "configuring", 00:08:44.794 "raid_level": "concat", 00:08:44.794 "superblock": false, 00:08:44.794 "num_base_bdevs": 3, 00:08:44.794 "num_base_bdevs_discovered": 1, 00:08:44.794 "num_base_bdevs_operational": 3, 00:08:44.794 "base_bdevs_list": [ 00:08:44.794 { 00:08:44.794 "name": null, 00:08:44.794 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:44.794 "is_configured": false, 00:08:44.794 "data_offset": 0, 00:08:44.794 "data_size": 65536 00:08:44.794 }, 00:08:44.794 { 00:08:44.794 "name": null, 00:08:44.794 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:44.794 "is_configured": false, 00:08:44.794 "data_offset": 0, 00:08:44.794 "data_size": 65536 00:08:44.794 }, 00:08:44.794 { 00:08:44.794 "name": "BaseBdev3", 00:08:44.794 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:44.794 "is_configured": true, 00:08:44.794 "data_offset": 0, 00:08:44.794 "data_size": 65536 00:08:44.794 } 00:08:44.794 ] 00:08:44.794 }' 00:08:44.794 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.794 17:50:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.055 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:45.055 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 [2024-10-25 17:50:03.516180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.315 17:50:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.316 "name": "Existed_Raid", 00:08:45.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.316 "strip_size_kb": 64, 00:08:45.316 "state": "configuring", 00:08:45.316 "raid_level": "concat", 00:08:45.316 "superblock": false, 00:08:45.316 "num_base_bdevs": 3, 00:08:45.316 "num_base_bdevs_discovered": 2, 00:08:45.316 "num_base_bdevs_operational": 3, 00:08:45.316 "base_bdevs_list": [ 00:08:45.316 { 00:08:45.316 "name": null, 00:08:45.316 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:45.316 "is_configured": false, 00:08:45.316 "data_offset": 0, 00:08:45.316 "data_size": 65536 00:08:45.316 }, 00:08:45.316 { 00:08:45.316 "name": "BaseBdev2", 00:08:45.316 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:45.316 "is_configured": true, 00:08:45.316 "data_offset": 
0, 00:08:45.316 "data_size": 65536 00:08:45.316 }, 00:08:45.316 { 00:08:45.316 "name": "BaseBdev3", 00:08:45.316 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:45.316 "is_configured": true, 00:08:45.316 "data_offset": 0, 00:08:45.316 "data_size": 65536 00:08:45.316 } 00:08:45.316 ] 00:08:45.316 }' 00:08:45.316 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.316 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.574 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.574 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.574 17:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.574 17:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.574 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb485b7f-6564-4e07-ac18-fc43fb237096 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.835 [2024-10-25 17:50:04.127902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:45.835 [2024-10-25 17:50:04.127939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:45.835 [2024-10-25 17:50:04.127948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:45.835 [2024-10-25 17:50:04.128188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:45.835 [2024-10-25 17:50:04.128364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:45.835 [2024-10-25 17:50:04.128374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:45.835 [2024-10-25 17:50:04.128632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.835 NewBaseBdev 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.835 
17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.835 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.835 [ 00:08:45.835 { 00:08:45.835 "name": "NewBaseBdev", 00:08:45.835 "aliases": [ 00:08:45.835 "fb485b7f-6564-4e07-ac18-fc43fb237096" 00:08:45.835 ], 00:08:45.835 "product_name": "Malloc disk", 00:08:45.835 "block_size": 512, 00:08:45.835 "num_blocks": 65536, 00:08:45.835 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:45.835 "assigned_rate_limits": { 00:08:45.835 "rw_ios_per_sec": 0, 00:08:45.835 "rw_mbytes_per_sec": 0, 00:08:45.835 "r_mbytes_per_sec": 0, 00:08:45.835 "w_mbytes_per_sec": 0 00:08:45.835 }, 00:08:45.835 "claimed": true, 00:08:45.836 "claim_type": "exclusive_write", 00:08:45.836 "zoned": false, 00:08:45.836 "supported_io_types": { 00:08:45.836 "read": true, 00:08:45.836 "write": true, 00:08:45.836 "unmap": true, 00:08:45.836 "flush": true, 00:08:45.836 "reset": true, 00:08:45.836 "nvme_admin": false, 00:08:45.836 "nvme_io": false, 00:08:45.836 "nvme_io_md": false, 00:08:45.836 "write_zeroes": true, 00:08:45.836 "zcopy": true, 00:08:45.836 "get_zone_info": false, 00:08:45.836 "zone_management": false, 00:08:45.836 "zone_append": false, 00:08:45.836 "compare": false, 00:08:45.836 "compare_and_write": false, 00:08:45.836 "abort": true, 00:08:45.836 "seek_hole": false, 00:08:45.836 "seek_data": false, 00:08:45.836 "copy": true, 00:08:45.836 "nvme_iov_md": false 00:08:45.836 }, 00:08:45.836 
"memory_domains": [ 00:08:45.836 { 00:08:45.836 "dma_device_id": "system", 00:08:45.836 "dma_device_type": 1 00:08:45.836 }, 00:08:45.836 { 00:08:45.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.836 "dma_device_type": 2 00:08:45.836 } 00:08:45.836 ], 00:08:45.836 "driver_specific": {} 00:08:45.836 } 00:08:45.836 ] 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.836 "name": "Existed_Raid", 00:08:45.836 "uuid": "99ce1d90-1931-4843-ac07-b9dddd32a2eb", 00:08:45.836 "strip_size_kb": 64, 00:08:45.836 "state": "online", 00:08:45.836 "raid_level": "concat", 00:08:45.836 "superblock": false, 00:08:45.836 "num_base_bdevs": 3, 00:08:45.836 "num_base_bdevs_discovered": 3, 00:08:45.836 "num_base_bdevs_operational": 3, 00:08:45.836 "base_bdevs_list": [ 00:08:45.836 { 00:08:45.836 "name": "NewBaseBdev", 00:08:45.836 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:45.836 "is_configured": true, 00:08:45.836 "data_offset": 0, 00:08:45.836 "data_size": 65536 00:08:45.836 }, 00:08:45.836 { 00:08:45.836 "name": "BaseBdev2", 00:08:45.836 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:45.836 "is_configured": true, 00:08:45.836 "data_offset": 0, 00:08:45.836 "data_size": 65536 00:08:45.836 }, 00:08:45.836 { 00:08:45.836 "name": "BaseBdev3", 00:08:45.836 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:45.836 "is_configured": true, 00:08:45.836 "data_offset": 0, 00:08:45.836 "data_size": 65536 00:08:45.836 } 00:08:45.836 ] 00:08:45.836 }' 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.836 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.406 [2024-10-25 17:50:04.603408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.406 "name": "Existed_Raid", 00:08:46.406 "aliases": [ 00:08:46.406 "99ce1d90-1931-4843-ac07-b9dddd32a2eb" 00:08:46.406 ], 00:08:46.406 "product_name": "Raid Volume", 00:08:46.406 "block_size": 512, 00:08:46.406 "num_blocks": 196608, 00:08:46.406 "uuid": "99ce1d90-1931-4843-ac07-b9dddd32a2eb", 00:08:46.406 "assigned_rate_limits": { 00:08:46.406 "rw_ios_per_sec": 0, 00:08:46.406 "rw_mbytes_per_sec": 0, 00:08:46.406 "r_mbytes_per_sec": 0, 00:08:46.406 "w_mbytes_per_sec": 0 00:08:46.406 }, 00:08:46.406 "claimed": false, 00:08:46.406 "zoned": false, 00:08:46.406 "supported_io_types": { 00:08:46.406 "read": true, 00:08:46.406 "write": true, 00:08:46.406 "unmap": true, 00:08:46.406 "flush": true, 00:08:46.406 "reset": true, 00:08:46.406 "nvme_admin": false, 00:08:46.406 "nvme_io": false, 00:08:46.406 "nvme_io_md": false, 00:08:46.406 "write_zeroes": true, 
00:08:46.406 "zcopy": false, 00:08:46.406 "get_zone_info": false, 00:08:46.406 "zone_management": false, 00:08:46.406 "zone_append": false, 00:08:46.406 "compare": false, 00:08:46.406 "compare_and_write": false, 00:08:46.406 "abort": false, 00:08:46.406 "seek_hole": false, 00:08:46.406 "seek_data": false, 00:08:46.406 "copy": false, 00:08:46.406 "nvme_iov_md": false 00:08:46.406 }, 00:08:46.406 "memory_domains": [ 00:08:46.406 { 00:08:46.406 "dma_device_id": "system", 00:08:46.406 "dma_device_type": 1 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.406 "dma_device_type": 2 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "dma_device_id": "system", 00:08:46.406 "dma_device_type": 1 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.406 "dma_device_type": 2 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "dma_device_id": "system", 00:08:46.406 "dma_device_type": 1 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.406 "dma_device_type": 2 00:08:46.406 } 00:08:46.406 ], 00:08:46.406 "driver_specific": { 00:08:46.406 "raid": { 00:08:46.406 "uuid": "99ce1d90-1931-4843-ac07-b9dddd32a2eb", 00:08:46.406 "strip_size_kb": 64, 00:08:46.406 "state": "online", 00:08:46.406 "raid_level": "concat", 00:08:46.406 "superblock": false, 00:08:46.406 "num_base_bdevs": 3, 00:08:46.406 "num_base_bdevs_discovered": 3, 00:08:46.406 "num_base_bdevs_operational": 3, 00:08:46.406 "base_bdevs_list": [ 00:08:46.406 { 00:08:46.406 "name": "NewBaseBdev", 00:08:46.406 "uuid": "fb485b7f-6564-4e07-ac18-fc43fb237096", 00:08:46.406 "is_configured": true, 00:08:46.406 "data_offset": 0, 00:08:46.406 "data_size": 65536 00:08:46.406 }, 00:08:46.406 { 00:08:46.406 "name": "BaseBdev2", 00:08:46.406 "uuid": "f3846460-e62e-4f6d-9cb4-17a4fe3eddad", 00:08:46.406 "is_configured": true, 00:08:46.406 "data_offset": 0, 00:08:46.406 "data_size": 65536 00:08:46.406 }, 00:08:46.406 { 
00:08:46.406 "name": "BaseBdev3", 00:08:46.406 "uuid": "e9e058a8-476f-434f-926c-d2de138e20c6", 00:08:46.406 "is_configured": true, 00:08:46.406 "data_offset": 0, 00:08:46.406 "data_size": 65536 00:08:46.406 } 00:08:46.406 ] 00:08:46.406 } 00:08:46.406 } 00:08:46.406 }' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:46.406 BaseBdev2 00:08:46.406 BaseBdev3' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.406 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.666 17:50:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:46.667 [2024-10-25 17:50:04.902586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.667 [2024-10-25 17:50:04.902654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.667 [2024-10-25 17:50:04.902729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.667 [2024-10-25 17:50:04.902782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.667 [2024-10-25 17:50:04.902810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65381 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65381 ']' 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65381 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65381 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.667 killing process with pid 65381 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65381' 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65381 00:08:46.667 [2024-10-25 17:50:04.956050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.667 17:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65381 00:08:46.927 [2024-10-25 17:50:05.240735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.866 17:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.866 00:08:47.866 real 0m10.552s 00:08:47.866 user 0m16.821s 00:08:47.866 sys 0m1.950s 00:08:47.866 17:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.866 17:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.866 ************************************ 00:08:47.866 END TEST raid_state_function_test 00:08:47.866 ************************************ 00:08:48.127 17:50:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:48.127 17:50:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:48.127 17:50:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.127 17:50:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.127 ************************************ 00:08:48.127 START TEST raid_state_function_test_sb 00:08:48.127 ************************************ 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66002 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66002' 00:08:48.127 Process raid pid: 66002 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66002 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66002 ']' 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
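The `waitforlisten 66002` step above blocks until the freshly launched bdev_svc process is alive and listening on /var/tmp/spdk.sock. A minimal sketch of that polling idiom follows; this is a simplified illustration with an invented helper name, not the actual implementation from common/autotest_common.sh (which also probes the RPC socket itself):

```shell
# wait_for_sock: hypothetical, simplified stand-in for waitforlisten.
# Polls until the target pid is alive and its socket path exists,
# or gives up after max_retries attempts.
wait_for_sock() {
    local pid=$1 sock=$2 max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # process exited early
        # The real helper waits on the RPC UNIX socket; -e keeps this
        # sketch testable without opening an actual socket.
        [[ -e $sock ]] && return 0
        sleep 0.1
    done
    return 1  # timed out
}

# Usage sketch: wait on our own shell and a path we create ourselves.
tmp_sock=$(mktemp)
wait_for_sock "$$" "$tmp_sock" 5 && echo "listening"
rm -f "$tmp_sock"
```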
00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.127 17:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.127 [2024-10-25 17:50:06.463998] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:08:48.127 [2024-10-25 17:50:06.464145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.387 [2024-10-25 17:50:06.642747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.388 [2024-10-25 17:50:06.749350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.648 [2024-10-25 17:50:06.942265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.648 [2024-10-25 17:50:06.942306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.908 [2024-10-25 17:50:07.314027] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.908 [2024-10-25 17:50:07.314081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.908 [2024-10-25 
17:50:07.314092] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.908 [2024-10-25 17:50:07.314101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.908 [2024-10-25 17:50:07.314107] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.908 [2024-10-25 17:50:07.314116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.908 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.909 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.169 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.169 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.169 "name": "Existed_Raid", 00:08:49.169 "uuid": "d2c3bb2b-b8c9-444f-9e1b-4ad60d520f5d", 00:08:49.169 "strip_size_kb": 64, 00:08:49.169 "state": "configuring", 00:08:49.169 "raid_level": "concat", 00:08:49.169 "superblock": true, 00:08:49.169 "num_base_bdevs": 3, 00:08:49.169 "num_base_bdevs_discovered": 0, 00:08:49.169 "num_base_bdevs_operational": 3, 00:08:49.169 "base_bdevs_list": [ 00:08:49.169 { 00:08:49.169 "name": "BaseBdev1", 00:08:49.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.169 "is_configured": false, 00:08:49.169 "data_offset": 0, 00:08:49.169 "data_size": 0 00:08:49.169 }, 00:08:49.169 { 00:08:49.169 "name": "BaseBdev2", 00:08:49.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.169 "is_configured": false, 00:08:49.169 "data_offset": 0, 00:08:49.169 "data_size": 0 00:08:49.169 }, 00:08:49.169 { 00:08:49.169 "name": "BaseBdev3", 00:08:49.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.169 "is_configured": false, 00:08:49.169 "data_offset": 0, 00:08:49.169 "data_size": 0 00:08:49.169 } 00:08:49.169 ] 00:08:49.169 }' 00:08:49.169 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.169 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 [2024-10-25 17:50:07.737234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.429 [2024-10-25 17:50:07.737270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 [2024-10-25 17:50:07.745231] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.429 [2024-10-25 17:50:07.745276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.429 [2024-10-25 17:50:07.745284] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.429 [2024-10-25 17:50:07.745306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.429 [2024-10-25 17:50:07.745312] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.429 [2024-10-25 17:50:07.745321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.429 
17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 [2024-10-25 17:50:07.788280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.429 BaseBdev1 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.429 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.429 [ 00:08:49.429 { 
00:08:49.429 "name": "BaseBdev1", 00:08:49.429 "aliases": [ 00:08:49.429 "b205e2d5-7df8-467c-b26a-c9031f33f2ed" 00:08:49.429 ], 00:08:49.429 "product_name": "Malloc disk", 00:08:49.429 "block_size": 512, 00:08:49.429 "num_blocks": 65536, 00:08:49.429 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:49.429 "assigned_rate_limits": { 00:08:49.429 "rw_ios_per_sec": 0, 00:08:49.429 "rw_mbytes_per_sec": 0, 00:08:49.429 "r_mbytes_per_sec": 0, 00:08:49.429 "w_mbytes_per_sec": 0 00:08:49.429 }, 00:08:49.429 "claimed": true, 00:08:49.429 "claim_type": "exclusive_write", 00:08:49.429 "zoned": false, 00:08:49.429 "supported_io_types": { 00:08:49.429 "read": true, 00:08:49.429 "write": true, 00:08:49.429 "unmap": true, 00:08:49.430 "flush": true, 00:08:49.430 "reset": true, 00:08:49.430 "nvme_admin": false, 00:08:49.430 "nvme_io": false, 00:08:49.430 "nvme_io_md": false, 00:08:49.430 "write_zeroes": true, 00:08:49.430 "zcopy": true, 00:08:49.430 "get_zone_info": false, 00:08:49.430 "zone_management": false, 00:08:49.430 "zone_append": false, 00:08:49.430 "compare": false, 00:08:49.430 "compare_and_write": false, 00:08:49.430 "abort": true, 00:08:49.430 "seek_hole": false, 00:08:49.430 "seek_data": false, 00:08:49.430 "copy": true, 00:08:49.430 "nvme_iov_md": false 00:08:49.430 }, 00:08:49.430 "memory_domains": [ 00:08:49.430 { 00:08:49.430 "dma_device_id": "system", 00:08:49.430 "dma_device_type": 1 00:08:49.430 }, 00:08:49.430 { 00:08:49.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.430 "dma_device_type": 2 00:08:49.430 } 00:08:49.430 ], 00:08:49.430 "driver_specific": {} 00:08:49.430 } 00:08:49.430 ] 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.430 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.690 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.690 "name": "Existed_Raid", 00:08:49.690 "uuid": "ccc5d45d-b67d-4da4-9ec2-7c49092b9cfd", 00:08:49.690 "strip_size_kb": 64, 00:08:49.690 "state": "configuring", 00:08:49.690 "raid_level": "concat", 00:08:49.690 "superblock": true, 00:08:49.690 
"num_base_bdevs": 3, 00:08:49.690 "num_base_bdevs_discovered": 1, 00:08:49.690 "num_base_bdevs_operational": 3, 00:08:49.690 "base_bdevs_list": [ 00:08:49.690 { 00:08:49.690 "name": "BaseBdev1", 00:08:49.690 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:49.690 "is_configured": true, 00:08:49.690 "data_offset": 2048, 00:08:49.690 "data_size": 63488 00:08:49.690 }, 00:08:49.690 { 00:08:49.690 "name": "BaseBdev2", 00:08:49.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.690 "is_configured": false, 00:08:49.690 "data_offset": 0, 00:08:49.690 "data_size": 0 00:08:49.690 }, 00:08:49.690 { 00:08:49.690 "name": "BaseBdev3", 00:08:49.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.690 "is_configured": false, 00:08:49.690 "data_offset": 0, 00:08:49.690 "data_size": 0 00:08:49.690 } 00:08:49.690 ] 00:08:49.690 }' 00:08:49.690 17:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.690 17:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.951 [2024-10-25 17:50:08.223629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.951 [2024-10-25 17:50:08.223715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.951 
17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.951 [2024-10-25 17:50:08.231673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.951 [2024-10-25 17:50:08.233525] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.951 [2024-10-25 17:50:08.233610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.951 [2024-10-25 17:50:08.233638] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.951 [2024-10-25 17:50:08.233660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.951 "name": "Existed_Raid", 00:08:49.951 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:49.951 "strip_size_kb": 64, 00:08:49.951 "state": "configuring", 00:08:49.951 "raid_level": "concat", 00:08:49.951 "superblock": true, 00:08:49.951 "num_base_bdevs": 3, 00:08:49.951 "num_base_bdevs_discovered": 1, 00:08:49.951 "num_base_bdevs_operational": 3, 00:08:49.951 "base_bdevs_list": [ 00:08:49.951 { 00:08:49.951 "name": "BaseBdev1", 00:08:49.951 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:49.951 "is_configured": true, 00:08:49.951 "data_offset": 2048, 00:08:49.951 "data_size": 63488 00:08:49.951 }, 00:08:49.951 { 00:08:49.951 "name": "BaseBdev2", 00:08:49.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.951 "is_configured": false, 00:08:49.951 "data_offset": 0, 00:08:49.951 "data_size": 0 00:08:49.951 }, 00:08:49.951 { 00:08:49.951 "name": "BaseBdev3", 00:08:49.951 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.951 "is_configured": false, 00:08:49.951 "data_offset": 0, 00:08:49.951 "data_size": 0 00:08:49.951 } 00:08:49.951 ] 00:08:49.951 }' 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.951 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.522 [2024-10-25 17:50:08.724001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.522 BaseBdev2 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.522 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 [ 00:08:50.523 { 00:08:50.523 "name": "BaseBdev2", 00:08:50.523 "aliases": [ 00:08:50.523 "bd33e0b4-f366-49f1-bdbc-1b1c605c8278" 00:08:50.523 ], 00:08:50.523 "product_name": "Malloc disk", 00:08:50.523 "block_size": 512, 00:08:50.523 "num_blocks": 65536, 00:08:50.523 "uuid": "bd33e0b4-f366-49f1-bdbc-1b1c605c8278", 00:08:50.523 "assigned_rate_limits": { 00:08:50.523 "rw_ios_per_sec": 0, 00:08:50.523 "rw_mbytes_per_sec": 0, 00:08:50.523 "r_mbytes_per_sec": 0, 00:08:50.523 "w_mbytes_per_sec": 0 00:08:50.523 }, 00:08:50.523 "claimed": true, 00:08:50.523 "claim_type": "exclusive_write", 00:08:50.523 "zoned": false, 00:08:50.523 "supported_io_types": { 00:08:50.523 "read": true, 00:08:50.523 "write": true, 00:08:50.523 "unmap": true, 00:08:50.523 "flush": true, 00:08:50.523 "reset": true, 00:08:50.523 "nvme_admin": false, 00:08:50.523 "nvme_io": false, 00:08:50.523 "nvme_io_md": false, 00:08:50.523 "write_zeroes": true, 00:08:50.523 "zcopy": true, 00:08:50.523 "get_zone_info": false, 00:08:50.523 "zone_management": false, 00:08:50.523 "zone_append": false, 00:08:50.523 "compare": false, 00:08:50.523 "compare_and_write": false, 00:08:50.523 "abort": true, 00:08:50.523 "seek_hole": false, 00:08:50.523 "seek_data": false, 00:08:50.523 "copy": true, 00:08:50.523 "nvme_iov_md": false 00:08:50.523 }, 00:08:50.523 "memory_domains": [ 00:08:50.523 { 00:08:50.523 "dma_device_id": "system", 00:08:50.523 "dma_device_type": 1 00:08:50.523 }, 00:08:50.523 { 00:08:50.523 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.523 "dma_device_type": 2 00:08:50.523 } 00:08:50.523 ], 00:08:50.523 "driver_specific": {} 00:08:50.523 } 00:08:50.523 ] 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.523 "name": "Existed_Raid", 00:08:50.523 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:50.523 "strip_size_kb": 64, 00:08:50.523 "state": "configuring", 00:08:50.523 "raid_level": "concat", 00:08:50.523 "superblock": true, 00:08:50.523 "num_base_bdevs": 3, 00:08:50.523 "num_base_bdevs_discovered": 2, 00:08:50.523 "num_base_bdevs_operational": 3, 00:08:50.523 "base_bdevs_list": [ 00:08:50.523 { 00:08:50.523 "name": "BaseBdev1", 00:08:50.523 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:50.523 "is_configured": true, 00:08:50.523 "data_offset": 2048, 00:08:50.523 "data_size": 63488 00:08:50.523 }, 00:08:50.523 { 00:08:50.523 "name": "BaseBdev2", 00:08:50.523 "uuid": "bd33e0b4-f366-49f1-bdbc-1b1c605c8278", 00:08:50.523 "is_configured": true, 00:08:50.523 "data_offset": 2048, 00:08:50.523 "data_size": 63488 00:08:50.523 }, 00:08:50.523 { 00:08:50.523 "name": "BaseBdev3", 00:08:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.523 "is_configured": false, 00:08:50.523 "data_offset": 0, 00:08:50.523 "data_size": 0 00:08:50.523 } 00:08:50.523 ] 00:08:50.523 }' 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.523 17:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.093 17:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.093 [2024-10-25 17:50:09.294372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.093 [2024-10-25 17:50:09.294639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.093 [2024-10-25 17:50:09.294663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.093 [2024-10-25 17:50:09.294948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:51.093 [2024-10-25 17:50:09.295096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.093 [2024-10-25 17:50:09.295106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:51.093 [2024-10-25 17:50:09.295260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.093 BaseBdev3 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.093 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.093 [ 00:08:51.093 { 00:08:51.093 "name": "BaseBdev3", 00:08:51.093 "aliases": [ 00:08:51.093 "c2539c09-33be-4b2d-9fbb-ccef1cda38fd" 00:08:51.093 ], 00:08:51.093 "product_name": "Malloc disk", 00:08:51.093 "block_size": 512, 00:08:51.093 "num_blocks": 65536, 00:08:51.093 "uuid": "c2539c09-33be-4b2d-9fbb-ccef1cda38fd", 00:08:51.093 "assigned_rate_limits": { 00:08:51.093 "rw_ios_per_sec": 0, 00:08:51.093 "rw_mbytes_per_sec": 0, 00:08:51.093 "r_mbytes_per_sec": 0, 00:08:51.093 "w_mbytes_per_sec": 0 00:08:51.093 }, 00:08:51.093 "claimed": true, 00:08:51.093 "claim_type": "exclusive_write", 00:08:51.093 "zoned": false, 00:08:51.093 "supported_io_types": { 00:08:51.093 "read": true, 00:08:51.093 "write": true, 00:08:51.093 "unmap": true, 00:08:51.093 "flush": true, 00:08:51.093 "reset": true, 00:08:51.093 "nvme_admin": false, 00:08:51.093 "nvme_io": false, 00:08:51.093 "nvme_io_md": false, 00:08:51.093 "write_zeroes": true, 00:08:51.093 "zcopy": true, 00:08:51.093 "get_zone_info": false, 00:08:51.093 "zone_management": false, 00:08:51.093 "zone_append": false, 00:08:51.093 "compare": false, 00:08:51.093 "compare_and_write": false, 00:08:51.093 "abort": true, 00:08:51.093 "seek_hole": false, 00:08:51.094 "seek_data": false, 
00:08:51.094 "copy": true, 00:08:51.094 "nvme_iov_md": false 00:08:51.094 }, 00:08:51.094 "memory_domains": [ 00:08:51.094 { 00:08:51.094 "dma_device_id": "system", 00:08:51.094 "dma_device_type": 1 00:08:51.094 }, 00:08:51.094 { 00:08:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.094 "dma_device_type": 2 00:08:51.094 } 00:08:51.094 ], 00:08:51.094 "driver_specific": {} 00:08:51.094 } 00:08:51.094 ] 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.094 "name": "Existed_Raid", 00:08:51.094 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:51.094 "strip_size_kb": 64, 00:08:51.094 "state": "online", 00:08:51.094 "raid_level": "concat", 00:08:51.094 "superblock": true, 00:08:51.094 "num_base_bdevs": 3, 00:08:51.094 "num_base_bdevs_discovered": 3, 00:08:51.094 "num_base_bdevs_operational": 3, 00:08:51.094 "base_bdevs_list": [ 00:08:51.094 { 00:08:51.094 "name": "BaseBdev1", 00:08:51.094 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:51.094 "is_configured": true, 00:08:51.094 "data_offset": 2048, 00:08:51.094 "data_size": 63488 00:08:51.094 }, 00:08:51.094 { 00:08:51.094 "name": "BaseBdev2", 00:08:51.094 "uuid": "bd33e0b4-f366-49f1-bdbc-1b1c605c8278", 00:08:51.094 "is_configured": true, 00:08:51.094 "data_offset": 2048, 00:08:51.094 "data_size": 63488 00:08:51.094 }, 00:08:51.094 { 00:08:51.094 "name": "BaseBdev3", 00:08:51.094 "uuid": "c2539c09-33be-4b2d-9fbb-ccef1cda38fd", 00:08:51.094 "is_configured": true, 00:08:51.094 "data_offset": 2048, 00:08:51.094 "data_size": 63488 00:08:51.094 } 00:08:51.094 ] 00:08:51.094 }' 00:08:51.094 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.094 17:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.363 [2024-10-25 17:50:09.749931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.363 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.363 "name": "Existed_Raid", 00:08:51.363 "aliases": [ 00:08:51.363 "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb" 00:08:51.363 ], 00:08:51.363 "product_name": "Raid Volume", 00:08:51.363 "block_size": 512, 00:08:51.363 "num_blocks": 190464, 00:08:51.363 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:51.363 "assigned_rate_limits": { 00:08:51.363 "rw_ios_per_sec": 0, 00:08:51.363 "rw_mbytes_per_sec": 0, 00:08:51.363 
"r_mbytes_per_sec": 0, 00:08:51.363 "w_mbytes_per_sec": 0 00:08:51.363 }, 00:08:51.363 "claimed": false, 00:08:51.363 "zoned": false, 00:08:51.363 "supported_io_types": { 00:08:51.364 "read": true, 00:08:51.364 "write": true, 00:08:51.364 "unmap": true, 00:08:51.364 "flush": true, 00:08:51.364 "reset": true, 00:08:51.364 "nvme_admin": false, 00:08:51.364 "nvme_io": false, 00:08:51.364 "nvme_io_md": false, 00:08:51.364 "write_zeroes": true, 00:08:51.364 "zcopy": false, 00:08:51.364 "get_zone_info": false, 00:08:51.364 "zone_management": false, 00:08:51.364 "zone_append": false, 00:08:51.364 "compare": false, 00:08:51.364 "compare_and_write": false, 00:08:51.364 "abort": false, 00:08:51.364 "seek_hole": false, 00:08:51.364 "seek_data": false, 00:08:51.364 "copy": false, 00:08:51.364 "nvme_iov_md": false 00:08:51.364 }, 00:08:51.364 "memory_domains": [ 00:08:51.364 { 00:08:51.364 "dma_device_id": "system", 00:08:51.364 "dma_device_type": 1 00:08:51.364 }, 00:08:51.364 { 00:08:51.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.364 "dma_device_type": 2 00:08:51.364 }, 00:08:51.364 { 00:08:51.364 "dma_device_id": "system", 00:08:51.364 "dma_device_type": 1 00:08:51.364 }, 00:08:51.364 { 00:08:51.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.364 "dma_device_type": 2 00:08:51.364 }, 00:08:51.364 { 00:08:51.364 "dma_device_id": "system", 00:08:51.364 "dma_device_type": 1 00:08:51.365 }, 00:08:51.365 { 00:08:51.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.365 "dma_device_type": 2 00:08:51.365 } 00:08:51.365 ], 00:08:51.365 "driver_specific": { 00:08:51.365 "raid": { 00:08:51.365 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:51.365 "strip_size_kb": 64, 00:08:51.365 "state": "online", 00:08:51.365 "raid_level": "concat", 00:08:51.365 "superblock": true, 00:08:51.365 "num_base_bdevs": 3, 00:08:51.365 "num_base_bdevs_discovered": 3, 00:08:51.365 "num_base_bdevs_operational": 3, 00:08:51.365 "base_bdevs_list": [ 00:08:51.365 { 00:08:51.365 
"name": "BaseBdev1", 00:08:51.365 "uuid": "b205e2d5-7df8-467c-b26a-c9031f33f2ed", 00:08:51.365 "is_configured": true, 00:08:51.365 "data_offset": 2048, 00:08:51.366 "data_size": 63488 00:08:51.366 }, 00:08:51.366 { 00:08:51.366 "name": "BaseBdev2", 00:08:51.366 "uuid": "bd33e0b4-f366-49f1-bdbc-1b1c605c8278", 00:08:51.366 "is_configured": true, 00:08:51.366 "data_offset": 2048, 00:08:51.366 "data_size": 63488 00:08:51.366 }, 00:08:51.366 { 00:08:51.366 "name": "BaseBdev3", 00:08:51.366 "uuid": "c2539c09-33be-4b2d-9fbb-ccef1cda38fd", 00:08:51.366 "is_configured": true, 00:08:51.366 "data_offset": 2048, 00:08:51.366 "data_size": 63488 00:08:51.366 } 00:08:51.366 ] 00:08:51.366 } 00:08:51.366 } 00:08:51.366 }' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:51.640 BaseBdev2 00:08:51.640 BaseBdev3' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.640 17:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.640 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.641 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.641 17:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.641 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.641 17:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.641 [2024-10-25 17:50:09.973285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.641 [2024-10-25 17:50:09.973352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.641 [2024-10-25 17:50:09.973409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.641 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.911 "name": "Existed_Raid", 00:08:51.911 "uuid": "ab20ab1e-0e1a-43ef-9510-5b5e5b780acb", 00:08:51.911 "strip_size_kb": 64, 00:08:51.911 "state": "offline", 00:08:51.911 "raid_level": "concat", 00:08:51.911 "superblock": true, 00:08:51.911 "num_base_bdevs": 3, 00:08:51.911 "num_base_bdevs_discovered": 2, 00:08:51.911 "num_base_bdevs_operational": 2, 00:08:51.911 "base_bdevs_list": [ 00:08:51.911 { 00:08:51.911 "name": null, 00:08:51.911 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:51.911 "is_configured": false, 00:08:51.911 "data_offset": 0, 00:08:51.911 "data_size": 63488 00:08:51.911 }, 00:08:51.911 { 00:08:51.911 "name": "BaseBdev2", 00:08:51.911 "uuid": "bd33e0b4-f366-49f1-bdbc-1b1c605c8278", 00:08:51.911 "is_configured": true, 00:08:51.911 "data_offset": 2048, 00:08:51.911 "data_size": 63488 00:08:51.911 }, 00:08:51.911 { 00:08:51.911 "name": "BaseBdev3", 00:08:51.911 "uuid": "c2539c09-33be-4b2d-9fbb-ccef1cda38fd", 00:08:51.911 "is_configured": true, 00:08:51.911 "data_offset": 2048, 00:08:51.911 "data_size": 63488 00:08:51.911 } 00:08:51.911 ] 00:08:51.911 }' 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.911 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.171 [2024-10-25 17:50:10.497135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.171 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.432 [2024-10-25 17:50:10.665491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.432 [2024-10-25 17:50:10.665588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.432 BaseBdev2 00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.432 
17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.432 [
00:08:52.432 {
00:08:52.432 "name": "BaseBdev2",
00:08:52.432 "aliases": [
00:08:52.432 "309ef86f-b9e0-4916-803c-0d35b1c286c9"
00:08:52.432 ],
00:08:52.432 "product_name": "Malloc disk",
00:08:52.432 "block_size": 512,
00:08:52.432 "num_blocks": 65536,
00:08:52.432 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:52.432 "assigned_rate_limits": {
00:08:52.432 "rw_ios_per_sec": 0,
00:08:52.432 "rw_mbytes_per_sec": 0,
00:08:52.432 "r_mbytes_per_sec": 0,
00:08:52.432 "w_mbytes_per_sec": 0
00:08:52.432 },
00:08:52.432 "claimed": false,
00:08:52.432 "zoned": false,
00:08:52.432 "supported_io_types": {
00:08:52.432 "read": true,
00:08:52.432 "write": true,
00:08:52.432 "unmap": true,
00:08:52.432 "flush": true,
00:08:52.432 "reset": true,
00:08:52.432 "nvme_admin": false,
00:08:52.432 "nvme_io": false,
00:08:52.432 "nvme_io_md": false,
00:08:52.432 "write_zeroes": true,
00:08:52.432 "zcopy": true,
00:08:52.432 "get_zone_info": false,
00:08:52.432 "zone_management": false,
00:08:52.432 "zone_append": false,
00:08:52.432 "compare": false,
00:08:52.432 "compare_and_write": false,
00:08:52.432 "abort": true,
00:08:52.432 "seek_hole": false,
00:08:52.432 "seek_data": false,
00:08:52.432 "copy": true,
00:08:52.432 "nvme_iov_md": false
00:08:52.432 },
00:08:52.432 "memory_domains": [
00:08:52.432 {
00:08:52.432 "dma_device_id": "system",
00:08:52.432 "dma_device_type": 1
00:08:52.432 },
00:08:52.432 {
00:08:52.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:52.432 "dma_device_type": 2
00:08:52.432 }
00:08:52.432 ],
00:08:52.432 "driver_specific": {}
00:08:52.432 }
00:08:52.432 ]
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.432 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.693 BaseBdev3
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.693 [
00:08:52.693 {
00:08:52.693 "name": "BaseBdev3",
00:08:52.693 "aliases": [
00:08:52.693 "6f13147b-0e82-48f2-a894-8fee0181a173"
00:08:52.693 ],
00:08:52.693 "product_name": "Malloc disk",
00:08:52.693 "block_size": 512,
00:08:52.693 "num_blocks": 65536,
00:08:52.693 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:52.693 "assigned_rate_limits": {
00:08:52.693 "rw_ios_per_sec": 0,
00:08:52.693 "rw_mbytes_per_sec": 0,
00:08:52.693 "r_mbytes_per_sec": 0,
00:08:52.693 "w_mbytes_per_sec": 0
00:08:52.693 },
00:08:52.693 "claimed": false,
00:08:52.693 "zoned": false,
00:08:52.693 "supported_io_types": {
00:08:52.693 "read": true,
00:08:52.693 "write": true,
00:08:52.693 "unmap": true,
00:08:52.693 "flush": true,
00:08:52.693 "reset": true,
00:08:52.693 "nvme_admin": false,
00:08:52.693 "nvme_io": false,
00:08:52.693 "nvme_io_md": false,
00:08:52.693 "write_zeroes": true,
00:08:52.693 "zcopy": true,
00:08:52.693 "get_zone_info": false,
00:08:52.693 "zone_management": false,
00:08:52.693 "zone_append": false,
00:08:52.693 "compare": false,
00:08:52.693 "compare_and_write": false,
00:08:52.693 "abort": true,
00:08:52.693 "seek_hole": false,
00:08:52.693 "seek_data": false,
00:08:52.693 "copy": true,
00:08:52.693 "nvme_iov_md": false
00:08:52.693 },
00:08:52.693 "memory_domains": [
00:08:52.693 {
00:08:52.693 "dma_device_id": "system",
00:08:52.693 "dma_device_type": 1
00:08:52.693 },
00:08:52.693 {
00:08:52.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:52.693 "dma_device_type": 2
00:08:52.693 }
00:08:52.693 ],
00:08:52.693 "driver_specific": {}
00:08:52.693 }
00:08:52.693 ]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.693 [2024-10-25 17:50:10.942418] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:52.693 [2024-10-25 17:50:10.942508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:52.693 [2024-10-25 17:50:10.942552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:52.693 [2024-10-25 17:50:10.944383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:52.693 17:50:10
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.693 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:52.693 "name": "Existed_Raid",
00:08:52.693 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:52.693 "strip_size_kb": 64,
00:08:52.693 "state": "configuring",
00:08:52.693 "raid_level": "concat",
00:08:52.693 "superblock": true,
00:08:52.693 "num_base_bdevs": 3,
00:08:52.693 "num_base_bdevs_discovered": 2,
00:08:52.693 "num_base_bdevs_operational": 3,
00:08:52.693 "base_bdevs_list": [
00:08:52.693 {
00:08:52.693 "name": "BaseBdev1",
00:08:52.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:52.693 "is_configured": false,
00:08:52.693 "data_offset": 0,
00:08:52.693 "data_size": 0
00:08:52.693 },
00:08:52.693 {
00:08:52.693 "name": "BaseBdev2",
00:08:52.693 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:52.693 "is_configured": true,
00:08:52.693 "data_offset": 2048,
00:08:52.693 "data_size": 63488
00:08:52.693 },
00:08:52.693 {
00:08:52.693 "name": "BaseBdev3",
00:08:52.693 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:52.694 "is_configured": true,
00:08:52.694 "data_offset": 2048,
00:08:52.694 "data_size": 63488
00:08:52.694 }
00:08:52.694 ]
00:08:52.694 }'
00:08:52.694 17:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:52.694 17:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.263 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:53.263 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.263 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.264 [2024-10-25 17:50:11.405571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.264 "name": "Existed_Raid",
00:08:53.264 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:53.264 "strip_size_kb": 64,
00:08:53.264 "state": "configuring",
00:08:53.264 "raid_level": "concat",
00:08:53.264 "superblock": true,
00:08:53.264 "num_base_bdevs": 3,
00:08:53.264 "num_base_bdevs_discovered": 1,
00:08:53.264 "num_base_bdevs_operational": 3,
00:08:53.264 "base_bdevs_list": [
00:08:53.264 {
00:08:53.264 "name": "BaseBdev1",
00:08:53.264 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:53.264 "is_configured": false,
00:08:53.264 "data_offset": 0,
00:08:53.264 "data_size": 0
00:08:53.264 },
00:08:53.264 {
00:08:53.264 "name": null,
00:08:53.264 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:53.264 "is_configured": false,
00:08:53.264 "data_offset": 0,
00:08:53.264 "data_size": 63488
00:08:53.264 },
00:08:53.264 {
00:08:53.264 "name": "BaseBdev3",
00:08:53.264 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:53.264 "is_configured": true,
00:08:53.264 "data_offset": 2048,
00:08:53.264 "data_size": 63488
00:08:53.264 }
00:08:53.264 ]
00:08:53.264 }'
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:53.264 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.524 [2024-10-25 17:50:11.921358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:53.524 BaseBdev1
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.524 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.524 [
00:08:53.524 {
00:08:53.524 "name": "BaseBdev1",
00:08:53.524 "aliases": [
00:08:53.524 "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0"
00:08:53.524 ],
00:08:53.524 "product_name": "Malloc disk",
00:08:53.524 "block_size": 512,
00:08:53.524 "num_blocks": 65536,
00:08:53.524 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:53.524 "assigned_rate_limits": {
00:08:53.524 "rw_ios_per_sec": 0,
00:08:53.524 "rw_mbytes_per_sec": 0,
00:08:53.525 "r_mbytes_per_sec": 0,
00:08:53.525 "w_mbytes_per_sec": 0
00:08:53.525 },
00:08:53.525 "claimed": true,
00:08:53.525 "claim_type": "exclusive_write",
00:08:53.525 "zoned": false,
00:08:53.525 "supported_io_types": {
00:08:53.525 "read": true,
00:08:53.525 "write": true,
00:08:53.525 "unmap": true,
00:08:53.525 "flush": true,
00:08:53.525 "reset": true,
00:08:53.525 "nvme_admin": false,
00:08:53.525 "nvme_io": false,
00:08:53.525 "nvme_io_md": false,
00:08:53.525 "write_zeroes": true,
00:08:53.525 "zcopy": true,
00:08:53.525 "get_zone_info": false,
00:08:53.525 "zone_management": false,
00:08:53.525 "zone_append": false,
00:08:53.525 "compare": false,
00:08:53.525 "compare_and_write": false,
00:08:53.525 "abort": true,
00:08:53.525 "seek_hole": false,
00:08:53.525 "seek_data": false,
00:08:53.525 "copy": true,
00:08:53.525 "nvme_iov_md": false
00:08:53.525 },
00:08:53.525 "memory_domains": [
00:08:53.525 {
00:08:53.525 "dma_device_id": "system",
00:08:53.525 "dma_device_type": 1
00:08:53.525 },
00:08:53.525 {
00:08:53.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.525 "dma_device_type": 2
00:08:53.525 }
00:08:53.525 ],
00:08:53.525 "driver_specific": {}
00:08:53.525 }
00:08:53.525 ]
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.525 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.785 17:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.785 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.785 "name": "Existed_Raid",
00:08:53.785 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:53.785 "strip_size_kb": 64,
00:08:53.785 "state": "configuring",
00:08:53.785 "raid_level": "concat",
00:08:53.785 "superblock": true,
00:08:53.785 "num_base_bdevs": 3,
00:08:53.785 "num_base_bdevs_discovered": 2,
00:08:53.785 "num_base_bdevs_operational": 3,
00:08:53.785 "base_bdevs_list": [
00:08:53.785 {
00:08:53.785 "name": "BaseBdev1",
00:08:53.785 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:53.785 "is_configured": true,
00:08:53.785 "data_offset": 2048,
00:08:53.785 "data_size": 63488
00:08:53.785 },
00:08:53.785 {
00:08:53.785 "name": null,
00:08:53.785 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:53.785 "is_configured": false,
00:08:53.785 "data_offset": 0,
00:08:53.785 "data_size": 63488
00:08:53.785 },
00:08:53.785 {
00:08:53.785 "name": "BaseBdev3",
00:08:53.785 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:53.785 "is_configured": true,
00:08:53.785 "data_offset": 2048,
00:08:53.785 "data_size": 63488
00:08:53.785 }
00:08:53.785 ]
00:08:53.785 }'
00:08:53.785 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:53.785 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.045 [2024-10-25 17:50:12.460443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:54.045 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.304 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.304 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.304 "name": "Existed_Raid",
00:08:54.304 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:54.304 "strip_size_kb": 64,
00:08:54.304 "state": "configuring",
00:08:54.305 "raid_level": "concat",
00:08:54.305 "superblock": true,
00:08:54.305 "num_base_bdevs": 3,
00:08:54.305 "num_base_bdevs_discovered": 1,
00:08:54.305 "num_base_bdevs_operational": 3,
00:08:54.305 "base_bdevs_list": [
00:08:54.305 {
00:08:54.305 "name": "BaseBdev1",
00:08:54.305 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:54.305 "is_configured": true,
00:08:54.305 "data_offset": 2048,
00:08:54.305 "data_size": 63488
00:08:54.305 },
00:08:54.305 {
00:08:54.305 "name": null,
00:08:54.305 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:54.305 "is_configured": false,
00:08:54.305 "data_offset": 0,
00:08:54.305 "data_size": 63488
00:08:54.305 },
00:08:54.305 {
00:08:54.305 "name": null,
00:08:54.305 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:54.305 "is_configured": false,
00:08:54.305 "data_offset": 0,
00:08:54.305 "data_size": 63488
00:08:54.305 }
00:08:54.305 ]
00:08:54.305 }'
00:08:54.305 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.305 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.563 [2024-10-25 17:50:12.944182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.563 17:50:12
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:54.563 17:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.822 17:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.822 "name": "Existed_Raid",
00:08:54.822 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:54.822 "strip_size_kb": 64,
00:08:54.822 "state": "configuring",
00:08:54.822 "raid_level": "concat",
00:08:54.822 "superblock": true,
00:08:54.822 "num_base_bdevs": 3,
00:08:54.822 "num_base_bdevs_discovered": 2,
00:08:54.822 "num_base_bdevs_operational": 3,
00:08:54.822 "base_bdevs_list": [
00:08:54.822 {
00:08:54.822 "name": "BaseBdev1",
00:08:54.822 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:54.822 "is_configured": true,
00:08:54.822 "data_offset": 2048,
00:08:54.822 "data_size": 63488
00:08:54.822 },
00:08:54.822 {
00:08:54.822 "name": null,
00:08:54.822 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:54.822 "is_configured": false,
00:08:54.822 "data_offset": 0,
00:08:54.822 "data_size": 63488
00:08:54.822 },
00:08:54.822 {
00:08:54.822 "name": "BaseBdev3",
00:08:54.822 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:54.822 "is_configured": true,
00:08:54.822 "data_offset": 2048,
00:08:54.822 "data_size": 63488
00:08:54.822 }
00:08:54.822 ]
00:08:54.822 }'
00:08:54.822 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.822 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.082 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.082 [2024-10-25 17:50:13.427364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:55.342 "name": "Existed_Raid",
00:08:55.342 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:55.342 "strip_size_kb": 64,
00:08:55.342 "state": "configuring",
00:08:55.342 "raid_level": "concat",
00:08:55.342 "superblock": true,
00:08:55.342 "num_base_bdevs": 3,
00:08:55.342 "num_base_bdevs_discovered": 1,
00:08:55.342 "num_base_bdevs_operational": 3,
00:08:55.342 "base_bdevs_list": [
00:08:55.342 {
00:08:55.342 "name": null,
00:08:55.342 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:55.342 "is_configured": false,
00:08:55.342 "data_offset": 0,
00:08:55.342 "data_size": 63488
00:08:55.342 },
00:08:55.342 {
00:08:55.342 "name": null,
00:08:55.342 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:55.342 "is_configured": false,
00:08:55.342 "data_offset": 0,
00:08:55.342 "data_size": 63488
00:08:55.342 },
00:08:55.342 {
00:08:55.342 "name": "BaseBdev3",
00:08:55.342 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:55.342 "is_configured": true,
00:08:55.342 "data_offset": 2048,
00:08:55.342 "data_size": 63488
00:08:55.342 }
00:08:55.342 ]
00:08:55.342 }'
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:55.342 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.602 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.602 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.602 17:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:55.602 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.602 17:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.602 [2024-10-25 17:50:14.026597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.602 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:55.862 "name": "Existed_Raid",
00:08:55.862 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4",
00:08:55.862 "strip_size_kb": 64,
00:08:55.862 "state": "configuring",
00:08:55.862 "raid_level": "concat",
00:08:55.862 "superblock": true,
00:08:55.862 "num_base_bdevs": 3,
00:08:55.862 "num_base_bdevs_discovered": 2,
00:08:55.862 "num_base_bdevs_operational": 3,
00:08:55.862 "base_bdevs_list": [
00:08:55.862 {
00:08:55.862 "name": null,
00:08:55.862 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0",
00:08:55.862 "is_configured": false,
00:08:55.862 "data_offset": 0,
00:08:55.862 "data_size": 63488
00:08:55.862 },
00:08:55.862 {
00:08:55.862 "name": "BaseBdev2",
00:08:55.862 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9",
00:08:55.862 "is_configured": true,
00:08:55.862 "data_offset": 2048,
00:08:55.862 "data_size": 63488
00:08:55.862 },
00:08:55.862 {
00:08:55.862 "name": "BaseBdev3",
00:08:55.862 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173",
00:08:55.862 "is_configured": true,
00:08:55.862 "data_offset": 2048,
00:08:55.862 "data_size": 63488
00:08:55.862 }
00:08:55.862 ]
00:08:55.862 }'
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:55.862 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:56.122 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.122 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:56.122 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.122 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:56.122 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.383 [2024-10-25 17:50:14.641445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:56.383 [2024-10-25 17:50:14.641747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:56.383 NewBaseBdev 00:08:56.383 [2024-10-25 17:50:14.641798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.383 [2024-10-25 17:50:14.642092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:56.383 [2024-10-25 17:50:14.642242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:56.383 [2024-10-25 17:50:14.642252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:08:56.383 [2024-10-25 17:50:14.642385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.383 [ 00:08:56.383 { 00:08:56.383 "name": "NewBaseBdev", 00:08:56.383 "aliases": [ 00:08:56.383 "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0" 00:08:56.383 ], 00:08:56.383 "product_name": "Malloc disk", 00:08:56.383 "block_size": 512, 
00:08:56.383 "num_blocks": 65536, 00:08:56.383 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0", 00:08:56.383 "assigned_rate_limits": { 00:08:56.383 "rw_ios_per_sec": 0, 00:08:56.383 "rw_mbytes_per_sec": 0, 00:08:56.383 "r_mbytes_per_sec": 0, 00:08:56.383 "w_mbytes_per_sec": 0 00:08:56.383 }, 00:08:56.383 "claimed": true, 00:08:56.383 "claim_type": "exclusive_write", 00:08:56.383 "zoned": false, 00:08:56.383 "supported_io_types": { 00:08:56.383 "read": true, 00:08:56.383 "write": true, 00:08:56.383 "unmap": true, 00:08:56.383 "flush": true, 00:08:56.383 "reset": true, 00:08:56.383 "nvme_admin": false, 00:08:56.383 "nvme_io": false, 00:08:56.383 "nvme_io_md": false, 00:08:56.383 "write_zeroes": true, 00:08:56.383 "zcopy": true, 00:08:56.383 "get_zone_info": false, 00:08:56.383 "zone_management": false, 00:08:56.383 "zone_append": false, 00:08:56.383 "compare": false, 00:08:56.383 "compare_and_write": false, 00:08:56.383 "abort": true, 00:08:56.383 "seek_hole": false, 00:08:56.383 "seek_data": false, 00:08:56.383 "copy": true, 00:08:56.383 "nvme_iov_md": false 00:08:56.383 }, 00:08:56.383 "memory_domains": [ 00:08:56.383 { 00:08:56.383 "dma_device_id": "system", 00:08:56.383 "dma_device_type": 1 00:08:56.383 }, 00:08:56.383 { 00:08:56.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.383 "dma_device_type": 2 00:08:56.383 } 00:08:56.383 ], 00:08:56.383 "driver_specific": {} 00:08:56.383 } 00:08:56.383 ] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.383 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.384 "name": "Existed_Raid", 00:08:56.384 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4", 00:08:56.384 "strip_size_kb": 64, 00:08:56.384 "state": "online", 00:08:56.384 "raid_level": "concat", 00:08:56.384 "superblock": true, 00:08:56.384 "num_base_bdevs": 3, 00:08:56.384 "num_base_bdevs_discovered": 3, 00:08:56.384 "num_base_bdevs_operational": 3, 00:08:56.384 "base_bdevs_list": [ 00:08:56.384 { 00:08:56.384 "name": "NewBaseBdev", 00:08:56.384 "uuid": 
"0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0", 00:08:56.384 "is_configured": true, 00:08:56.384 "data_offset": 2048, 00:08:56.384 "data_size": 63488 00:08:56.384 }, 00:08:56.384 { 00:08:56.384 "name": "BaseBdev2", 00:08:56.384 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9", 00:08:56.384 "is_configured": true, 00:08:56.384 "data_offset": 2048, 00:08:56.384 "data_size": 63488 00:08:56.384 }, 00:08:56.384 { 00:08:56.384 "name": "BaseBdev3", 00:08:56.384 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173", 00:08:56.384 "is_configured": true, 00:08:56.384 "data_offset": 2048, 00:08:56.384 "data_size": 63488 00:08:56.384 } 00:08:56.384 ] 00:08:56.384 }' 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.384 17:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:56.953 [2024-10-25 17:50:15.120978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.953 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.953 "name": "Existed_Raid", 00:08:56.953 "aliases": [ 00:08:56.953 "ba27ee43-bd7b-4f66-9d20-3974269e85f4" 00:08:56.953 ], 00:08:56.953 "product_name": "Raid Volume", 00:08:56.953 "block_size": 512, 00:08:56.953 "num_blocks": 190464, 00:08:56.953 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4", 00:08:56.953 "assigned_rate_limits": { 00:08:56.953 "rw_ios_per_sec": 0, 00:08:56.953 "rw_mbytes_per_sec": 0, 00:08:56.953 "r_mbytes_per_sec": 0, 00:08:56.953 "w_mbytes_per_sec": 0 00:08:56.953 }, 00:08:56.953 "claimed": false, 00:08:56.953 "zoned": false, 00:08:56.953 "supported_io_types": { 00:08:56.953 "read": true, 00:08:56.953 "write": true, 00:08:56.953 "unmap": true, 00:08:56.953 "flush": true, 00:08:56.953 "reset": true, 00:08:56.953 "nvme_admin": false, 00:08:56.953 "nvme_io": false, 00:08:56.953 "nvme_io_md": false, 00:08:56.953 "write_zeroes": true, 00:08:56.953 "zcopy": false, 00:08:56.953 "get_zone_info": false, 00:08:56.953 "zone_management": false, 00:08:56.953 "zone_append": false, 00:08:56.953 "compare": false, 00:08:56.953 "compare_and_write": false, 00:08:56.953 "abort": false, 00:08:56.953 "seek_hole": false, 00:08:56.953 "seek_data": false, 00:08:56.953 "copy": false, 00:08:56.953 "nvme_iov_md": false 00:08:56.953 }, 00:08:56.953 "memory_domains": [ 00:08:56.953 { 00:08:56.953 "dma_device_id": "system", 00:08:56.953 "dma_device_type": 1 00:08:56.953 }, 00:08:56.953 { 00:08:56.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.953 "dma_device_type": 2 00:08:56.953 }, 00:08:56.953 { 00:08:56.953 "dma_device_id": "system", 00:08:56.953 "dma_device_type": 1 00:08:56.953 }, 00:08:56.953 { 00:08:56.953 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.953 "dma_device_type": 2 00:08:56.953 }, 00:08:56.953 { 00:08:56.953 "dma_device_id": "system", 00:08:56.953 "dma_device_type": 1 00:08:56.953 }, 00:08:56.953 { 00:08:56.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.953 "dma_device_type": 2 00:08:56.953 } 00:08:56.953 ], 00:08:56.953 "driver_specific": { 00:08:56.953 "raid": { 00:08:56.954 "uuid": "ba27ee43-bd7b-4f66-9d20-3974269e85f4", 00:08:56.954 "strip_size_kb": 64, 00:08:56.954 "state": "online", 00:08:56.954 "raid_level": "concat", 00:08:56.954 "superblock": true, 00:08:56.954 "num_base_bdevs": 3, 00:08:56.954 "num_base_bdevs_discovered": 3, 00:08:56.954 "num_base_bdevs_operational": 3, 00:08:56.954 "base_bdevs_list": [ 00:08:56.954 { 00:08:56.954 "name": "NewBaseBdev", 00:08:56.954 "uuid": "0ae68a42-7c50-4cf6-90a2-ba626a9fb0a0", 00:08:56.954 "is_configured": true, 00:08:56.954 "data_offset": 2048, 00:08:56.954 "data_size": 63488 00:08:56.954 }, 00:08:56.954 { 00:08:56.954 "name": "BaseBdev2", 00:08:56.954 "uuid": "309ef86f-b9e0-4916-803c-0d35b1c286c9", 00:08:56.954 "is_configured": true, 00:08:56.954 "data_offset": 2048, 00:08:56.954 "data_size": 63488 00:08:56.954 }, 00:08:56.954 { 00:08:56.954 "name": "BaseBdev3", 00:08:56.954 "uuid": "6f13147b-0e82-48f2-a894-8fee0181a173", 00:08:56.954 "is_configured": true, 00:08:56.954 "data_offset": 2048, 00:08:56.954 "data_size": 63488 00:08:56.954 } 00:08:56.954 ] 00:08:56.954 } 00:08:56.954 } 00:08:56.954 }' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:56.954 BaseBdev2 00:08:56.954 BaseBdev3' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.954 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.214 [2024-10-25 17:50:15.396186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.214 [2024-10-25 17:50:15.396213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.214 [2024-10-25 17:50:15.396286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.214 [2024-10-25 17:50:15.396343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.214 [2024-10-25 17:50:15.396355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66002 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66002 ']' 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66002 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66002 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.214 killing process with pid 66002 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66002' 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66002 00:08:57.214 [2024-10-25 17:50:15.447168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.214 17:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66002 00:08:57.474 [2024-10-25 17:50:15.743014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.413 17:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.413 00:08:58.413 real 0m10.422s 00:08:58.413 user 0m16.576s 00:08:58.413 sys 0m1.929s 00:08:58.413 17:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:58.413 ************************************ 00:08:58.413 END TEST raid_state_function_test_sb 00:08:58.413 ************************************ 00:08:58.413 17:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.413 17:50:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:58.413 17:50:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:58.413 17:50:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.413 17:50:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.673 ************************************ 00:08:58.673 START TEST raid_superblock_test 00:08:58.673 ************************************ 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:58.673 17:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66621 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66621 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66621 ']' 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.673 17:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.673 [2024-10-25 17:50:16.960314] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:08:58.673 [2024-10-25 17:50:16.960446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66621 ] 00:08:58.933 [2024-10-25 17:50:17.141123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.933 [2024-10-25 17:50:17.246959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.193 [2024-10-25 17:50:17.440698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.193 [2024-10-25 17:50:17.440752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:59.452 
17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.452 malloc1 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.452 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.452 [2024-10-25 17:50:17.804856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.452 [2024-10-25 17:50:17.804959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.452 [2024-10-25 17:50:17.805002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:59.453 [2024-10-25 17:50:17.805030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.453 [2024-10-25 17:50:17.806952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.453 [2024-10-25 17:50:17.807022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.453 pt1 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.453 malloc2 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.453 [2024-10-25 17:50:17.861009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.453 [2024-10-25 17:50:17.861058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.453 [2024-10-25 17:50:17.861079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:59.453 [2024-10-25 17:50:17.861088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.453 [2024-10-25 17:50:17.862966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.453 [2024-10-25 17:50:17.863001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.453 
pt2 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.453 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 malloc3 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 [2024-10-25 17:50:17.953752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.713 [2024-10-25 17:50:17.953873] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.713 [2024-10-25 17:50:17.953918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:59.713 [2024-10-25 17:50:17.953955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.713 [2024-10-25 17:50:17.955998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.713 [2024-10-25 17:50:17.956075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.713 pt3 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 [2024-10-25 17:50:17.969773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.713 [2024-10-25 17:50:17.971488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.713 [2024-10-25 17:50:17.971542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.713 [2024-10-25 17:50:17.971685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:59.713 [2024-10-25 17:50:17.971699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.713 [2024-10-25 17:50:17.971977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:59.713 [2024-10-25 17:50:17.972182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:59.713 [2024-10-25 17:50:17.972228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:59.713 [2024-10-25 17:50:17.972418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.713 17:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.713 17:50:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.713 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.713 "name": "raid_bdev1", 00:08:59.713 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:08:59.713 "strip_size_kb": 64, 00:08:59.713 "state": "online", 00:08:59.713 "raid_level": "concat", 00:08:59.713 "superblock": true, 00:08:59.713 "num_base_bdevs": 3, 00:08:59.713 "num_base_bdevs_discovered": 3, 00:08:59.714 "num_base_bdevs_operational": 3, 00:08:59.714 "base_bdevs_list": [ 00:08:59.714 { 00:08:59.714 "name": "pt1", 00:08:59.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.714 "is_configured": true, 00:08:59.714 "data_offset": 2048, 00:08:59.714 "data_size": 63488 00:08:59.714 }, 00:08:59.714 { 00:08:59.714 "name": "pt2", 00:08:59.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.714 "is_configured": true, 00:08:59.714 "data_offset": 2048, 00:08:59.714 "data_size": 63488 00:08:59.714 }, 00:08:59.714 { 00:08:59.714 "name": "pt3", 00:08:59.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.714 "is_configured": true, 00:08:59.714 "data_offset": 2048, 00:08:59.714 "data_size": 63488 00:08:59.714 } 00:08:59.714 ] 00:08:59.714 }' 00:08:59.714 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.714 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.284 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.284 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.284 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.285 [2024-10-25 17:50:18.469224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.285 "name": "raid_bdev1", 00:09:00.285 "aliases": [ 00:09:00.285 "75405f10-f471-4a4b-92c0-4793e3fef477" 00:09:00.285 ], 00:09:00.285 "product_name": "Raid Volume", 00:09:00.285 "block_size": 512, 00:09:00.285 "num_blocks": 190464, 00:09:00.285 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:00.285 "assigned_rate_limits": { 00:09:00.285 "rw_ios_per_sec": 0, 00:09:00.285 "rw_mbytes_per_sec": 0, 00:09:00.285 "r_mbytes_per_sec": 0, 00:09:00.285 "w_mbytes_per_sec": 0 00:09:00.285 }, 00:09:00.285 "claimed": false, 00:09:00.285 "zoned": false, 00:09:00.285 "supported_io_types": { 00:09:00.285 "read": true, 00:09:00.285 "write": true, 00:09:00.285 "unmap": true, 00:09:00.285 "flush": true, 00:09:00.285 "reset": true, 00:09:00.285 "nvme_admin": false, 00:09:00.285 "nvme_io": false, 00:09:00.285 "nvme_io_md": false, 00:09:00.285 "write_zeroes": true, 00:09:00.285 "zcopy": false, 00:09:00.285 "get_zone_info": false, 00:09:00.285 "zone_management": false, 00:09:00.285 "zone_append": false, 00:09:00.285 "compare": 
false, 00:09:00.285 "compare_and_write": false, 00:09:00.285 "abort": false, 00:09:00.285 "seek_hole": false, 00:09:00.285 "seek_data": false, 00:09:00.285 "copy": false, 00:09:00.285 "nvme_iov_md": false 00:09:00.285 }, 00:09:00.285 "memory_domains": [ 00:09:00.285 { 00:09:00.285 "dma_device_id": "system", 00:09:00.285 "dma_device_type": 1 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.285 "dma_device_type": 2 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "dma_device_id": "system", 00:09:00.285 "dma_device_type": 1 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.285 "dma_device_type": 2 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "dma_device_id": "system", 00:09:00.285 "dma_device_type": 1 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.285 "dma_device_type": 2 00:09:00.285 } 00:09:00.285 ], 00:09:00.285 "driver_specific": { 00:09:00.285 "raid": { 00:09:00.285 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:00.285 "strip_size_kb": 64, 00:09:00.285 "state": "online", 00:09:00.285 "raid_level": "concat", 00:09:00.285 "superblock": true, 00:09:00.285 "num_base_bdevs": 3, 00:09:00.285 "num_base_bdevs_discovered": 3, 00:09:00.285 "num_base_bdevs_operational": 3, 00:09:00.285 "base_bdevs_list": [ 00:09:00.285 { 00:09:00.285 "name": "pt1", 00:09:00.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.285 "is_configured": true, 00:09:00.285 "data_offset": 2048, 00:09:00.285 "data_size": 63488 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "name": "pt2", 00:09:00.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.285 "is_configured": true, 00:09:00.285 "data_offset": 2048, 00:09:00.285 "data_size": 63488 00:09:00.285 }, 00:09:00.285 { 00:09:00.285 "name": "pt3", 00:09:00.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.285 "is_configured": true, 00:09:00.285 "data_offset": 2048, 00:09:00.285 
"data_size": 63488 00:09:00.285 } 00:09:00.285 ] 00:09:00.285 } 00:09:00.285 } 00:09:00.285 }' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.285 pt2 00:09:00.285 pt3' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.285 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.545 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.545 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 [2024-10-25 17:50:18.756619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=75405f10-f471-4a4b-92c0-4793e3fef477 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 75405f10-f471-4a4b-92c0-4793e3fef477 ']' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 [2024-10-25 17:50:18.800299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.546 [2024-10-25 17:50:18.800367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.546 [2024-10-25 17:50:18.800457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.546 [2024-10-25 17:50:18.800536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.546 [2024-10-25 17:50:18.800584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 [2024-10-25 17:50:18.956218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:00.546 [2024-10-25 17:50:18.957965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:00.546 
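The `NOT rpc_cmd bdev_raid_create ...` invocation above is the test's negative case: the malloc bdevs still carry the deleted raid's superblock, so re-creating `raid_bdev1` must fail (the JSON-RPC error `-17` / "File exists" appears in the output that follows), and the `NOT` helper succeeds only when the wrapped command fails. A rough Python model of that inversion (both functions are hypothetical stand-ins, not SPDK code):

```python
def NOT(cmd, *args):
    # Succeeds (returns 0) only when the wrapped command fails,
    # mirroring the shell helper's inverted exit status.
    try:
        cmd(*args)
    except RuntimeError:
        return 0   # expected failure -> NOT succeeds
    return 1       # unexpected success -> NOT fails

def bdev_raid_create_stub(name):
    # Stand-in for the failing RPC: stale superblocks on the base bdevs
    # make the create call fail with "File exists" (code -17).
    raise RuntimeError(f"Failed to create RAID bdev {name}: File exists")

es = NOT(bdev_raid_create_stub, "raid_bdev1")
```

This is why the log then shows `es=1` for the inner command but the surrounding `[[ 1 == 0 ]]` bookkeeping still lets the test proceed.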
[2024-10-25 17:50:18.958066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:00.546 [2024-10-25 17:50:18.958118] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:00.546 [2024-10-25 17:50:18.958164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:00.546 [2024-10-25 17:50:18.958181] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:00.546 [2024-10-25 17:50:18.958197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.546 [2024-10-25 17:50:18.958206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:00.546 request: 00:09:00.546 { 00:09:00.546 "name": "raid_bdev1", 00:09:00.546 "raid_level": "concat", 00:09:00.546 "base_bdevs": [ 00:09:00.546 "malloc1", 00:09:00.546 "malloc2", 00:09:00.546 "malloc3" 00:09:00.546 ], 00:09:00.546 "strip_size_kb": 64, 00:09:00.546 "superblock": false, 00:09:00.546 "method": "bdev_raid_create", 00:09:00.546 "req_id": 1 00:09:00.546 } 00:09:00.546 Got JSON-RPC error response 00:09:00.546 response: 00:09:00.546 { 00:09:00.546 "code": -17, 00:09:00.546 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:00.546 } 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.546 17:50:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.546 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 17:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.807 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:00.807 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:00.807 17:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 [2024-10-25 17:50:19.008177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.807 [2024-10-25 17:50:19.008261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.807 [2024-10-25 17:50:19.008294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:00.807 [2024-10-25 17:50:19.008322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.807 [2024-10-25 17:50:19.010402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.807 [2024-10-25 17:50:19.010472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.807 [2024-10-25 17:50:19.010559] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:00.807 [2024-10-25 17:50:19.010630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:00.807 pt1 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.807 "name": "raid_bdev1", 00:09:00.807 "uuid": 
"75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:00.807 "strip_size_kb": 64, 00:09:00.807 "state": "configuring", 00:09:00.807 "raid_level": "concat", 00:09:00.807 "superblock": true, 00:09:00.807 "num_base_bdevs": 3, 00:09:00.807 "num_base_bdevs_discovered": 1, 00:09:00.807 "num_base_bdevs_operational": 3, 00:09:00.807 "base_bdevs_list": [ 00:09:00.807 { 00:09:00.807 "name": "pt1", 00:09:00.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.807 "is_configured": true, 00:09:00.807 "data_offset": 2048, 00:09:00.807 "data_size": 63488 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "name": null, 00:09:00.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.807 "is_configured": false, 00:09:00.807 "data_offset": 2048, 00:09:00.807 "data_size": 63488 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "name": null, 00:09:00.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.807 "is_configured": false, 00:09:00.807 "data_offset": 2048, 00:09:00.807 "data_size": 63488 00:09:00.807 } 00:09:00.807 ] 00:09:00.807 }' 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.807 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 [2024-10-25 17:50:19.479419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.068 [2024-10-25 17:50:19.479492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.068 [2024-10-25 17:50:19.479515] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:01.068 [2024-10-25 17:50:19.479525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.068 [2024-10-25 17:50:19.479978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.068 [2024-10-25 17:50:19.479998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.068 [2024-10-25 17:50:19.480091] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.068 [2024-10-25 17:50:19.480114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.068 pt2 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 [2024-10-25 17:50:19.491394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.068 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.328 "name": "raid_bdev1", 00:09:01.328 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:01.328 "strip_size_kb": 64, 00:09:01.328 "state": "configuring", 00:09:01.328 "raid_level": "concat", 00:09:01.328 "superblock": true, 00:09:01.328 "num_base_bdevs": 3, 00:09:01.328 "num_base_bdevs_discovered": 1, 00:09:01.328 "num_base_bdevs_operational": 3, 00:09:01.328 "base_bdevs_list": [ 00:09:01.328 { 00:09:01.328 "name": "pt1", 00:09:01.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.328 "is_configured": true, 00:09:01.328 "data_offset": 2048, 00:09:01.328 "data_size": 63488 00:09:01.328 }, 00:09:01.328 { 00:09:01.328 "name": null, 00:09:01.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.328 "is_configured": false, 00:09:01.328 "data_offset": 0, 00:09:01.328 "data_size": 63488 00:09:01.328 }, 00:09:01.328 { 00:09:01.328 "name": null, 00:09:01.328 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:01.328 "is_configured": false, 00:09:01.328 "data_offset": 2048, 00:09:01.328 "data_size": 63488 00:09:01.328 } 00:09:01.328 ] 00:09:01.328 }' 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.328 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.589 [2024-10-25 17:50:19.886686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.589 [2024-10-25 17:50:19.886792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.589 [2024-10-25 17:50:19.886838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:01.589 [2024-10-25 17:50:19.886870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.589 [2024-10-25 17:50:19.887337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.589 [2024-10-25 17:50:19.887403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.589 [2024-10-25 17:50:19.887505] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.589 [2024-10-25 17:50:19.887557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.589 pt2 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.589 [2024-10-25 17:50:19.898660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:01.589 [2024-10-25 17:50:19.898745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.589 [2024-10-25 17:50:19.898761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:01.589 [2024-10-25 17:50:19.898771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.589 [2024-10-25 17:50:19.899148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.589 [2024-10-25 17:50:19.899179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:01.589 [2024-10-25 17:50:19.899240] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:01.589 [2024-10-25 17:50:19.899260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:01.589 [2024-10-25 17:50:19.899383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.589 [2024-10-25 17:50:19.899404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.589 [2024-10-25 17:50:19.899630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.589 [2024-10-25 
17:50:19.899764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.589 [2024-10-25 17:50:19.899772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:01.589 [2024-10-25 17:50:19.899921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.589 pt3 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.589 "name": "raid_bdev1", 00:09:01.589 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:01.589 "strip_size_kb": 64, 00:09:01.589 "state": "online", 00:09:01.589 "raid_level": "concat", 00:09:01.589 "superblock": true, 00:09:01.589 "num_base_bdevs": 3, 00:09:01.589 "num_base_bdevs_discovered": 3, 00:09:01.589 "num_base_bdevs_operational": 3, 00:09:01.589 "base_bdevs_list": [ 00:09:01.589 { 00:09:01.589 "name": "pt1", 00:09:01.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.589 "is_configured": true, 00:09:01.589 "data_offset": 2048, 00:09:01.589 "data_size": 63488 00:09:01.589 }, 00:09:01.589 { 00:09:01.589 "name": "pt2", 00:09:01.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.589 "is_configured": true, 00:09:01.589 "data_offset": 2048, 00:09:01.589 "data_size": 63488 00:09:01.589 }, 00:09:01.589 { 00:09:01.589 "name": "pt3", 00:09:01.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.589 "is_configured": true, 00:09:01.589 "data_offset": 2048, 00:09:01.589 "data_size": 63488 00:09:01.589 } 00:09:01.589 ] 00:09:01.589 }' 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.589 17:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:02.159 17:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.159 [2024-10-25 17:50:20.330205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.159 "name": "raid_bdev1", 00:09:02.159 "aliases": [ 00:09:02.159 "75405f10-f471-4a4b-92c0-4793e3fef477" 00:09:02.159 ], 00:09:02.159 "product_name": "Raid Volume", 00:09:02.159 "block_size": 512, 00:09:02.159 "num_blocks": 190464, 00:09:02.159 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:02.159 "assigned_rate_limits": { 00:09:02.159 "rw_ios_per_sec": 0, 00:09:02.159 "rw_mbytes_per_sec": 0, 00:09:02.159 "r_mbytes_per_sec": 0, 00:09:02.159 "w_mbytes_per_sec": 0 00:09:02.159 }, 00:09:02.159 "claimed": false, 00:09:02.159 "zoned": false, 00:09:02.159 "supported_io_types": { 00:09:02.159 "read": true, 00:09:02.159 "write": true, 00:09:02.159 "unmap": true, 00:09:02.159 "flush": true, 00:09:02.159 "reset": true, 00:09:02.159 "nvme_admin": false, 00:09:02.159 "nvme_io": false, 00:09:02.159 "nvme_io_md": false, 00:09:02.159 
"write_zeroes": true, 00:09:02.159 "zcopy": false, 00:09:02.159 "get_zone_info": false, 00:09:02.159 "zone_management": false, 00:09:02.159 "zone_append": false, 00:09:02.159 "compare": false, 00:09:02.159 "compare_and_write": false, 00:09:02.159 "abort": false, 00:09:02.159 "seek_hole": false, 00:09:02.159 "seek_data": false, 00:09:02.159 "copy": false, 00:09:02.159 "nvme_iov_md": false 00:09:02.159 }, 00:09:02.159 "memory_domains": [ 00:09:02.159 { 00:09:02.159 "dma_device_id": "system", 00:09:02.159 "dma_device_type": 1 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.159 "dma_device_type": 2 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "dma_device_id": "system", 00:09:02.159 "dma_device_type": 1 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.159 "dma_device_type": 2 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "dma_device_id": "system", 00:09:02.159 "dma_device_type": 1 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.159 "dma_device_type": 2 00:09:02.159 } 00:09:02.159 ], 00:09:02.159 "driver_specific": { 00:09:02.159 "raid": { 00:09:02.159 "uuid": "75405f10-f471-4a4b-92c0-4793e3fef477", 00:09:02.159 "strip_size_kb": 64, 00:09:02.159 "state": "online", 00:09:02.159 "raid_level": "concat", 00:09:02.159 "superblock": true, 00:09:02.159 "num_base_bdevs": 3, 00:09:02.159 "num_base_bdevs_discovered": 3, 00:09:02.159 "num_base_bdevs_operational": 3, 00:09:02.159 "base_bdevs_list": [ 00:09:02.159 { 00:09:02.159 "name": "pt1", 00:09:02.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.159 "is_configured": true, 00:09:02.159 "data_offset": 2048, 00:09:02.159 "data_size": 63488 00:09:02.159 }, 00:09:02.159 { 00:09:02.159 "name": "pt2", 00:09:02.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.159 "is_configured": true, 00:09:02.159 "data_offset": 2048, 00:09:02.159 "data_size": 63488 00:09:02.159 }, 00:09:02.159 
{ 00:09:02.159 "name": "pt3", 00:09:02.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.159 "is_configured": true, 00:09:02.159 "data_offset": 2048, 00:09:02.159 "data_size": 63488 00:09:02.159 } 00:09:02.159 ] 00:09:02.159 } 00:09:02.159 } 00:09:02.159 }' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:02.159 pt2 00:09:02.159 pt3' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:02.159 17:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.159 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.420 
[2024-10-25 17:50:20.609650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 75405f10-f471-4a4b-92c0-4793e3fef477 '!=' 75405f10-f471-4a4b-92c0-4793e3fef477 ']' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66621 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66621 ']' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66621 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66621 00:09:02.420 killing process with pid 66621 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66621' 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66621 00:09:02.420 [2024-10-25 17:50:20.672896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.420 [2024-10-25 17:50:20.672974] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.420 [2024-10-25 17:50:20.673031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.420 [2024-10-25 17:50:20.673041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:02.420 17:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66621 00:09:02.679 [2024-10-25 17:50:20.954458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.618 ************************************ 00:09:03.618 END TEST raid_superblock_test 00:09:03.618 ************************************ 00:09:03.618 17:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:03.618 00:09:03.618 real 0m5.173s 00:09:03.618 user 0m7.392s 00:09:03.618 sys 0m0.943s 00:09:03.618 17:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.618 17:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.879 17:50:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:03.879 17:50:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.879 17:50:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.879 17:50:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.879 ************************************ 00:09:03.879 START TEST raid_read_error_test 00:09:03.879 ************************************ 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:03.879 17:50:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WRIQKbCZdk 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66876 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66876 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 66876 ']' 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.879 17:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.879 [2024-10-25 17:50:22.218690] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:09:03.879 [2024-10-25 17:50:22.218819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66876 ] 00:09:04.139 [2024-10-25 17:50:22.383104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.139 [2024-10-25 17:50:22.494974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.400 [2024-10-25 17:50:22.681996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.400 [2024-10-25 17:50:22.682054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.660 BaseBdev1_malloc 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.660 true 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.660 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.660 [2024-10-25 17:50:23.089651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:04.660 [2024-10-25 17:50:23.089708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.660 [2024-10-25 17:50:23.089726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:04.660 [2024-10-25 17:50:23.089738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.660 [2024-10-25 17:50:23.091750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.660 [2024-10-25 17:50:23.091843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:04.920 BaseBdev1 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.920 BaseBdev2_malloc 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.920 true 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.920 [2024-10-25 17:50:23.153927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.920 [2024-10-25 17:50:23.154022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.920 [2024-10-25 17:50:23.154041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.920 [2024-10-25 17:50:23.154052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.920 [2024-10-25 17:50:23.156056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.920 [2024-10-25 17:50:23.156108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.920 BaseBdev2 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.920 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 BaseBdev3_malloc 00:09:04.921 17:50:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 true 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 [2024-10-25 17:50:23.255468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.921 [2024-10-25 17:50:23.255516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.921 [2024-10-25 17:50:23.255531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.921 [2024-10-25 17:50:23.255541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.921 [2024-10-25 17:50:23.257551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.921 [2024-10-25 17:50:23.257591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:04.921 BaseBdev3 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 [2024-10-25 17:50:23.267526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.921 [2024-10-25 17:50:23.269325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.921 [2024-10-25 17:50:23.269460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.921 [2024-10-25 17:50:23.269676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.921 [2024-10-25 17:50:23.269723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.921 [2024-10-25 17:50:23.269985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:04.921 [2024-10-25 17:50:23.270176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.921 [2024-10-25 17:50:23.270220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:04.921 [2024-10-25 17:50:23.270402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.921 17:50:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.921 "name": "raid_bdev1", 00:09:04.921 "uuid": "ececf705-a703-4521-a5dc-9fa360cd8b3a", 00:09:04.921 "strip_size_kb": 64, 00:09:04.921 "state": "online", 00:09:04.921 "raid_level": "concat", 00:09:04.921 "superblock": true, 00:09:04.921 "num_base_bdevs": 3, 00:09:04.921 "num_base_bdevs_discovered": 3, 00:09:04.921 "num_base_bdevs_operational": 3, 00:09:04.921 "base_bdevs_list": [ 00:09:04.921 { 00:09:04.921 "name": "BaseBdev1", 00:09:04.921 "uuid": "763b01e6-9228-5d99-bed6-15dc06a330fd", 00:09:04.921 "is_configured": true, 00:09:04.921 "data_offset": 2048, 00:09:04.921 "data_size": 63488 00:09:04.921 }, 00:09:04.921 { 00:09:04.921 "name": "BaseBdev2", 00:09:04.921 "uuid": "ad9838f0-cd0e-58f0-ab5f-52d5c3e62591", 00:09:04.921 "is_configured": true, 00:09:04.921 "data_offset": 2048, 00:09:04.921 "data_size": 63488 
00:09:04.921 }, 00:09:04.921 { 00:09:04.921 "name": "BaseBdev3", 00:09:04.921 "uuid": "b2710407-5372-5895-918b-96c31c5fe92d", 00:09:04.921 "is_configured": true, 00:09:04.921 "data_offset": 2048, 00:09:04.921 "data_size": 63488 00:09:04.921 } 00:09:04.921 ] 00:09:04.921 }' 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.921 17:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.490 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.490 17:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.490 [2024-10-25 17:50:23.803975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.428 "name": "raid_bdev1", 00:09:06.428 "uuid": "ececf705-a703-4521-a5dc-9fa360cd8b3a", 00:09:06.428 "strip_size_kb": 64, 00:09:06.428 "state": "online", 00:09:06.428 "raid_level": "concat", 00:09:06.428 "superblock": true, 00:09:06.428 "num_base_bdevs": 3, 00:09:06.428 "num_base_bdevs_discovered": 3, 00:09:06.428 "num_base_bdevs_operational": 3, 00:09:06.428 "base_bdevs_list": [ 00:09:06.428 { 00:09:06.428 "name": "BaseBdev1", 00:09:06.428 "uuid": "763b01e6-9228-5d99-bed6-15dc06a330fd", 00:09:06.428 "is_configured": true, 00:09:06.428 "data_offset": 2048, 00:09:06.428 "data_size": 63488 
00:09:06.428 }, 00:09:06.428 { 00:09:06.428 "name": "BaseBdev2", 00:09:06.428 "uuid": "ad9838f0-cd0e-58f0-ab5f-52d5c3e62591", 00:09:06.428 "is_configured": true, 00:09:06.428 "data_offset": 2048, 00:09:06.428 "data_size": 63488 00:09:06.428 }, 00:09:06.428 { 00:09:06.428 "name": "BaseBdev3", 00:09:06.428 "uuid": "b2710407-5372-5895-918b-96c31c5fe92d", 00:09:06.428 "is_configured": true, 00:09:06.428 "data_offset": 2048, 00:09:06.428 "data_size": 63488 00:09:06.428 } 00:09:06.428 ] 00:09:06.428 }' 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.428 17:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.996 [2024-10-25 17:50:25.181909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.996 [2024-10-25 17:50:25.182009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.996 [2024-10-25 17:50:25.184488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.996 [2024-10-25 17:50:25.184572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.996 [2024-10-25 17:50:25.184625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.996 [2024-10-25 17:50:25.184682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.996 { 00:09:06.996 "results": [ 00:09:06.996 { 00:09:06.996 "job": "raid_bdev1", 00:09:06.996 "core_mask": "0x1", 00:09:06.996 "workload": "randrw", 00:09:06.996 "percentage": 50, 
00:09:06.996 "status": "finished", 00:09:06.996 "queue_depth": 1, 00:09:06.996 "io_size": 131072, 00:09:06.996 "runtime": 1.378922, 00:09:06.996 "iops": 17143.826844448053, 00:09:06.996 "mibps": 2142.9783555560066, 00:09:06.996 "io_failed": 1, 00:09:06.996 "io_timeout": 0, 00:09:06.996 "avg_latency_us": 81.10275505750225, 00:09:06.996 "min_latency_us": 24.482096069868994, 00:09:06.996 "max_latency_us": 1273.5161572052402 00:09:06.996 } 00:09:06.996 ], 00:09:06.996 "core_count": 1 00:09:06.996 } 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66876 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 66876 ']' 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 66876 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66876 00:09:06.996 killing process with pid 66876 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66876' 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 66876 00:09:06.996 [2024-10-25 17:50:25.229025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.996 17:50:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 66876 00:09:07.254 [2024-10-25 
17:50:25.444285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WRIQKbCZdk 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.192 ************************************ 00:09:08.192 END TEST raid_read_error_test 00:09:08.192 ************************************ 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:08.192 00:09:08.192 real 0m4.435s 00:09:08.192 user 0m5.269s 00:09:08.192 sys 0m0.565s 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.192 17:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.192 17:50:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:08.192 17:50:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.192 17:50:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.192 17:50:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.192 ************************************ 00:09:08.192 START TEST raid_write_error_test 00:09:08.192 ************************************ 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:08.192 17:50:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.192 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.193 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.452 17:50:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tOTKC0QRPE 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67016 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67016 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67016 ']' 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.452 17:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.452 [2024-10-25 17:50:26.723558] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:09:08.452 [2024-10-25 17:50:26.723750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67016 ] 00:09:08.711 [2024-10-25 17:50:26.898569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.711 [2024-10-25 17:50:27.004329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.971 [2024-10-25 17:50:27.197340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.971 [2024-10-25 17:50:27.197476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 BaseBdev1_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 true 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 [2024-10-25 17:50:27.611603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.231 [2024-10-25 17:50:27.611697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.231 [2024-10-25 17:50:27.611733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.231 [2024-10-25 17:50:27.611762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.231 [2024-10-25 17:50:27.613700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.231 [2024-10-25 17:50:27.613779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.231 BaseBdev1 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.231 BaseBdev2_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.231 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 true 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 [2024-10-25 17:50:27.677340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.493 [2024-10-25 17:50:27.677392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.493 [2024-10-25 17:50:27.677408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.493 [2024-10-25 17:50:27.677418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.493 [2024-10-25 17:50:27.679308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.493 [2024-10-25 17:50:27.679351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.493 BaseBdev2 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.493 17:50:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 BaseBdev3_malloc 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 true 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 [2024-10-25 17:50:27.776678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:09.493 [2024-10-25 17:50:27.776768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.493 [2024-10-25 17:50:27.776805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:09.493 [2024-10-25 17:50:27.776872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.493 [2024-10-25 17:50:27.778877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.493 [2024-10-25 17:50:27.778948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:09.493 BaseBdev3 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 [2024-10-25 17:50:27.788731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.493 [2024-10-25 17:50:27.790495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.493 [2024-10-25 17:50:27.790571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.493 [2024-10-25 17:50:27.790761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:09.493 [2024-10-25 17:50:27.790773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:09.493 [2024-10-25 17:50:27.791026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:09.493 [2024-10-25 17:50:27.791171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:09.493 [2024-10-25 17:50:27.791184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:09.493 [2024-10-25 17:50:27.791323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.493 "name": "raid_bdev1", 00:09:09.493 "uuid": "a2f299ed-12be-46e7-9d2f-2ce14fb917aa", 00:09:09.493 "strip_size_kb": 64, 00:09:09.493 "state": "online", 00:09:09.493 "raid_level": "concat", 00:09:09.493 "superblock": true, 00:09:09.493 "num_base_bdevs": 3, 00:09:09.493 "num_base_bdevs_discovered": 3, 00:09:09.493 "num_base_bdevs_operational": 3, 00:09:09.493 "base_bdevs_list": [ 00:09:09.493 { 00:09:09.493 
"name": "BaseBdev1", 00:09:09.493 "uuid": "dc13bd24-e573-5b1c-a660-158fb0675a87", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 }, 00:09:09.493 { 00:09:09.493 "name": "BaseBdev2", 00:09:09.493 "uuid": "9b81dda6-82e1-5b70-b4e6-0b14a808348a", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 }, 00:09:09.493 { 00:09:09.493 "name": "BaseBdev3", 00:09:09.493 "uuid": "862622d7-7ffa-53ed-8e00-81498361e1b6", 00:09:09.493 "is_configured": true, 00:09:09.493 "data_offset": 2048, 00:09:09.493 "data_size": 63488 00:09:09.493 } 00:09:09.493 ] 00:09:09.493 }' 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.493 17:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.063 17:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.063 17:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.063 [2024-10-25 17:50:28.305284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:11.002 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.002 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.002 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.002 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.003 "name": "raid_bdev1", 00:09:11.003 "uuid": "a2f299ed-12be-46e7-9d2f-2ce14fb917aa", 00:09:11.003 "strip_size_kb": 64, 00:09:11.003 "state": "online", 
00:09:11.003 "raid_level": "concat", 00:09:11.003 "superblock": true, 00:09:11.003 "num_base_bdevs": 3, 00:09:11.003 "num_base_bdevs_discovered": 3, 00:09:11.003 "num_base_bdevs_operational": 3, 00:09:11.003 "base_bdevs_list": [ 00:09:11.003 { 00:09:11.003 "name": "BaseBdev1", 00:09:11.003 "uuid": "dc13bd24-e573-5b1c-a660-158fb0675a87", 00:09:11.003 "is_configured": true, 00:09:11.003 "data_offset": 2048, 00:09:11.003 "data_size": 63488 00:09:11.003 }, 00:09:11.003 { 00:09:11.003 "name": "BaseBdev2", 00:09:11.003 "uuid": "9b81dda6-82e1-5b70-b4e6-0b14a808348a", 00:09:11.003 "is_configured": true, 00:09:11.003 "data_offset": 2048, 00:09:11.003 "data_size": 63488 00:09:11.003 }, 00:09:11.003 { 00:09:11.003 "name": "BaseBdev3", 00:09:11.003 "uuid": "862622d7-7ffa-53ed-8e00-81498361e1b6", 00:09:11.003 "is_configured": true, 00:09:11.003 "data_offset": 2048, 00:09:11.003 "data_size": 63488 00:09:11.003 } 00:09:11.003 ] 00:09:11.003 }' 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.003 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.262 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.262 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.262 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.262 [2024-10-25 17:50:29.693118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.262 [2024-10-25 17:50:29.693235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.262 [2024-10-25 17:50:29.695699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.262 [2024-10-25 17:50:29.695739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.262 [2024-10-25 17:50:29.695774] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.262 [2024-10-25 17:50:29.695785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:11.522 { 00:09:11.522 "results": [ 00:09:11.522 { 00:09:11.522 "job": "raid_bdev1", 00:09:11.522 "core_mask": "0x1", 00:09:11.522 "workload": "randrw", 00:09:11.522 "percentage": 50, 00:09:11.522 "status": "finished", 00:09:11.522 "queue_depth": 1, 00:09:11.522 "io_size": 131072, 00:09:11.522 "runtime": 1.388914, 00:09:11.522 "iops": 16862.815120302625, 00:09:11.522 "mibps": 2107.851890037828, 00:09:11.522 "io_failed": 1, 00:09:11.522 "io_timeout": 0, 00:09:11.522 "avg_latency_us": 82.44706626360691, 00:09:11.522 "min_latency_us": 24.146724890829695, 00:09:11.522 "max_latency_us": 1352.216593886463 00:09:11.522 } 00:09:11.522 ], 00:09:11.522 "core_count": 1 00:09:11.522 } 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67016 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67016 ']' 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67016 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67016 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.522 killing process with pid 67016 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.522 17:50:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67016' 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67016 00:09:11.522 [2024-10-25 17:50:29.746249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.522 17:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67016 00:09:11.781 [2024-10-25 17:50:29.970450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tOTKC0QRPE 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:12.720 00:09:12.720 real 0m4.484s 00:09:12.720 user 0m5.328s 00:09:12.720 sys 0m0.573s 00:09:12.720 ************************************ 00:09:12.720 END TEST raid_write_error_test 00:09:12.720 ************************************ 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.720 17:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.981 17:50:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:12.981 17:50:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:12.981 17:50:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.981 17:50:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.981 17:50:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.981 ************************************ 00:09:12.981 START TEST raid_state_function_test 00:09:12.981 ************************************ 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:12.981 Process raid pid: 67159 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67159 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67159' 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67159 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67159 ']' 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.981 17:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.981 [2024-10-25 17:50:31.280077] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:09:12.981 [2024-10-25 17:50:31.280886] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.241 [2024-10-25 17:50:31.460876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.241 [2024-10-25 17:50:31.574942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.500 [2024-10-25 17:50:31.775200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.500 [2024-10-25 17:50:31.775323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.760 [2024-10-25 17:50:32.108989] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.760 [2024-10-25 17:50:32.109042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.760 [2024-10-25 17:50:32.109053] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.760 [2024-10-25 17:50:32.109062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.760 [2024-10-25 17:50:32.109068] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.760 [2024-10-25 17:50:32.109077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.760 
17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.760 "name": "Existed_Raid", 00:09:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.760 "strip_size_kb": 0, 00:09:13.760 "state": "configuring", 00:09:13.760 "raid_level": "raid1", 00:09:13.760 "superblock": false, 00:09:13.760 "num_base_bdevs": 3, 00:09:13.760 "num_base_bdevs_discovered": 0, 00:09:13.760 "num_base_bdevs_operational": 3, 00:09:13.760 "base_bdevs_list": [ 00:09:13.760 { 00:09:13.760 "name": "BaseBdev1", 00:09:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.760 "is_configured": false, 00:09:13.760 "data_offset": 0, 00:09:13.760 "data_size": 0 00:09:13.760 }, 00:09:13.760 { 00:09:13.760 "name": "BaseBdev2", 00:09:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.760 "is_configured": false, 00:09:13.760 "data_offset": 0, 00:09:13.760 "data_size": 0 00:09:13.760 }, 00:09:13.760 { 00:09:13.760 "name": "BaseBdev3", 00:09:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.760 "is_configured": false, 00:09:13.760 "data_offset": 0, 00:09:13.760 "data_size": 0 00:09:13.760 } 00:09:13.760 ] 00:09:13.760 }' 00:09:13.760 17:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.760 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 [2024-10-25 17:50:32.492323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.330 [2024-10-25 17:50:32.492423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 [2024-10-25 17:50:32.504262] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.330 [2024-10-25 17:50:32.504348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.330 [2024-10-25 17:50:32.504375] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.330 [2024-10-25 17:50:32.504397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.330 [2024-10-25 17:50:32.504415] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.330 [2024-10-25 17:50:32.504435] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 [2024-10-25 17:50:32.549285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.330 BaseBdev1 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.330 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 [ 00:09:14.330 { 00:09:14.330 "name": "BaseBdev1", 00:09:14.330 "aliases": [ 00:09:14.330 "38080070-1c53-4580-82bc-6e775e709416" 00:09:14.330 ], 00:09:14.330 "product_name": "Malloc disk", 00:09:14.330 "block_size": 512, 00:09:14.330 "num_blocks": 65536, 00:09:14.330 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:14.330 "assigned_rate_limits": { 00:09:14.330 "rw_ios_per_sec": 0, 00:09:14.330 "rw_mbytes_per_sec": 0, 00:09:14.330 "r_mbytes_per_sec": 0, 00:09:14.330 "w_mbytes_per_sec": 0 00:09:14.330 }, 00:09:14.330 "claimed": true, 00:09:14.330 "claim_type": "exclusive_write", 00:09:14.330 "zoned": false, 00:09:14.330 "supported_io_types": { 00:09:14.330 "read": true, 00:09:14.330 "write": true, 00:09:14.330 "unmap": true, 00:09:14.330 "flush": true, 00:09:14.330 "reset": true, 00:09:14.330 "nvme_admin": false, 00:09:14.330 "nvme_io": false, 00:09:14.330 "nvme_io_md": false, 00:09:14.330 "write_zeroes": true, 00:09:14.330 "zcopy": true, 00:09:14.330 "get_zone_info": false, 00:09:14.330 "zone_management": false, 00:09:14.330 "zone_append": false, 00:09:14.330 "compare": false, 00:09:14.330 "compare_and_write": false, 00:09:14.330 "abort": true, 00:09:14.330 "seek_hole": false, 00:09:14.330 "seek_data": false, 00:09:14.330 "copy": true, 00:09:14.330 "nvme_iov_md": false 00:09:14.330 }, 00:09:14.330 "memory_domains": [ 00:09:14.330 { 00:09:14.330 "dma_device_id": "system", 00:09:14.330 "dma_device_type": 1 00:09:14.330 }, 00:09:14.330 { 00:09:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.330 "dma_device_type": 2 00:09:14.330 } 00:09:14.330 ], 00:09:14.330 "driver_specific": {} 00:09:14.331 } 00:09:14.331 ] 00:09:14.331 17:50:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:14.331 "name": "Existed_Raid", 00:09:14.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.331 "strip_size_kb": 0, 00:09:14.331 "state": "configuring", 00:09:14.331 "raid_level": "raid1", 00:09:14.331 "superblock": false, 00:09:14.331 "num_base_bdevs": 3, 00:09:14.331 "num_base_bdevs_discovered": 1, 00:09:14.331 "num_base_bdevs_operational": 3, 00:09:14.331 "base_bdevs_list": [ 00:09:14.331 { 00:09:14.331 "name": "BaseBdev1", 00:09:14.331 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:14.331 "is_configured": true, 00:09:14.331 "data_offset": 0, 00:09:14.331 "data_size": 65536 00:09:14.331 }, 00:09:14.331 { 00:09:14.331 "name": "BaseBdev2", 00:09:14.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.331 "is_configured": false, 00:09:14.331 "data_offset": 0, 00:09:14.331 "data_size": 0 00:09:14.331 }, 00:09:14.331 { 00:09:14.331 "name": "BaseBdev3", 00:09:14.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.331 "is_configured": false, 00:09:14.331 "data_offset": 0, 00:09:14.331 "data_size": 0 00:09:14.331 } 00:09:14.331 ] 00:09:14.331 }' 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.331 17:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.901 [2024-10-25 17:50:33.060450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.901 [2024-10-25 17:50:33.060573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.901 [2024-10-25 17:50:33.068473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.901 [2024-10-25 17:50:33.070238] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.901 [2024-10-25 17:50:33.070314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.901 [2024-10-25 17:50:33.070342] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.901 [2024-10-25 17:50:33.070364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.901 "name": "Existed_Raid", 00:09:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.901 "strip_size_kb": 0, 00:09:14.901 "state": "configuring", 00:09:14.901 "raid_level": "raid1", 00:09:14.901 "superblock": false, 00:09:14.901 "num_base_bdevs": 3, 00:09:14.901 "num_base_bdevs_discovered": 1, 00:09:14.901 "num_base_bdevs_operational": 3, 00:09:14.901 "base_bdevs_list": [ 00:09:14.901 { 00:09:14.901 "name": "BaseBdev1", 00:09:14.901 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:14.901 "is_configured": true, 00:09:14.901 "data_offset": 0, 00:09:14.901 "data_size": 65536 00:09:14.901 }, 00:09:14.901 { 00:09:14.901 "name": "BaseBdev2", 00:09:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.901 
"is_configured": false, 00:09:14.901 "data_offset": 0, 00:09:14.901 "data_size": 0 00:09:14.901 }, 00:09:14.901 { 00:09:14.901 "name": "BaseBdev3", 00:09:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.901 "is_configured": false, 00:09:14.901 "data_offset": 0, 00:09:14.901 "data_size": 0 00:09:14.901 } 00:09:14.901 ] 00:09:14.901 }' 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.901 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 [2024-10-25 17:50:33.526400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.162 BaseBdev2 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.162 17:50:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 [ 00:09:15.162 { 00:09:15.162 "name": "BaseBdev2", 00:09:15.162 "aliases": [ 00:09:15.162 "9dd764eb-2915-4d07-b78f-db012a7d6630" 00:09:15.162 ], 00:09:15.162 "product_name": "Malloc disk", 00:09:15.162 "block_size": 512, 00:09:15.162 "num_blocks": 65536, 00:09:15.162 "uuid": "9dd764eb-2915-4d07-b78f-db012a7d6630", 00:09:15.162 "assigned_rate_limits": { 00:09:15.162 "rw_ios_per_sec": 0, 00:09:15.162 "rw_mbytes_per_sec": 0, 00:09:15.162 "r_mbytes_per_sec": 0, 00:09:15.162 "w_mbytes_per_sec": 0 00:09:15.162 }, 00:09:15.162 "claimed": true, 00:09:15.162 "claim_type": "exclusive_write", 00:09:15.162 "zoned": false, 00:09:15.162 "supported_io_types": { 00:09:15.162 "read": true, 00:09:15.162 "write": true, 00:09:15.162 "unmap": true, 00:09:15.162 "flush": true, 00:09:15.162 "reset": true, 00:09:15.162 "nvme_admin": false, 00:09:15.162 "nvme_io": false, 00:09:15.162 "nvme_io_md": false, 00:09:15.162 "write_zeroes": true, 00:09:15.162 "zcopy": true, 00:09:15.162 "get_zone_info": false, 00:09:15.162 "zone_management": false, 00:09:15.162 "zone_append": false, 00:09:15.162 "compare": false, 00:09:15.162 "compare_and_write": false, 00:09:15.162 "abort": true, 00:09:15.162 "seek_hole": false, 00:09:15.162 "seek_data": false, 00:09:15.162 "copy": true, 00:09:15.162 "nvme_iov_md": false 00:09:15.162 }, 00:09:15.162 
"memory_domains": [ 00:09:15.162 { 00:09:15.162 "dma_device_id": "system", 00:09:15.162 "dma_device_type": 1 00:09:15.162 }, 00:09:15.162 { 00:09:15.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.162 "dma_device_type": 2 00:09:15.162 } 00:09:15.162 ], 00:09:15.162 "driver_specific": {} 00:09:15.162 } 00:09:15.162 ] 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.423 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.423 "name": "Existed_Raid", 00:09:15.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.423 "strip_size_kb": 0, 00:09:15.423 "state": "configuring", 00:09:15.423 "raid_level": "raid1", 00:09:15.423 "superblock": false, 00:09:15.423 "num_base_bdevs": 3, 00:09:15.423 "num_base_bdevs_discovered": 2, 00:09:15.423 "num_base_bdevs_operational": 3, 00:09:15.423 "base_bdevs_list": [ 00:09:15.423 { 00:09:15.423 "name": "BaseBdev1", 00:09:15.423 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:15.423 "is_configured": true, 00:09:15.423 "data_offset": 0, 00:09:15.423 "data_size": 65536 00:09:15.423 }, 00:09:15.423 { 00:09:15.423 "name": "BaseBdev2", 00:09:15.423 "uuid": "9dd764eb-2915-4d07-b78f-db012a7d6630", 00:09:15.423 "is_configured": true, 00:09:15.423 "data_offset": 0, 00:09:15.423 "data_size": 65536 00:09:15.423 }, 00:09:15.423 { 00:09:15.423 "name": "BaseBdev3", 00:09:15.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.423 "is_configured": false, 00:09:15.423 "data_offset": 0, 00:09:15.423 "data_size": 0 00:09:15.423 } 00:09:15.423 ] 00:09:15.423 }' 00:09:15.423 17:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.423 17:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [2024-10-25 17:50:34.068446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.711 [2024-10-25 17:50:34.068565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.711 [2024-10-25 17:50:34.068584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.711 [2024-10-25 17:50:34.068877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.711 [2024-10-25 17:50:34.069049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.711 [2024-10-25 17:50:34.069058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.711 [2024-10-25 17:50:34.069333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.711 BaseBdev3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [ 00:09:15.711 { 00:09:15.711 "name": "BaseBdev3", 00:09:15.711 "aliases": [ 00:09:15.711 "fd645183-5f8c-47c1-80ad-55f036123584" 00:09:15.711 ], 00:09:15.711 "product_name": "Malloc disk", 00:09:15.711 "block_size": 512, 00:09:15.711 "num_blocks": 65536, 00:09:15.711 "uuid": "fd645183-5f8c-47c1-80ad-55f036123584", 00:09:15.711 "assigned_rate_limits": { 00:09:15.711 "rw_ios_per_sec": 0, 00:09:15.711 "rw_mbytes_per_sec": 0, 00:09:15.711 "r_mbytes_per_sec": 0, 00:09:15.711 "w_mbytes_per_sec": 0 00:09:15.711 }, 00:09:15.711 "claimed": true, 00:09:15.711 "claim_type": "exclusive_write", 00:09:15.711 "zoned": false, 00:09:15.711 "supported_io_types": { 00:09:15.711 "read": true, 00:09:15.711 "write": true, 00:09:15.711 "unmap": true, 00:09:15.711 "flush": true, 00:09:15.711 "reset": true, 00:09:15.711 "nvme_admin": false, 00:09:15.711 "nvme_io": false, 00:09:15.711 "nvme_io_md": false, 00:09:15.711 "write_zeroes": true, 00:09:15.711 "zcopy": true, 00:09:15.711 "get_zone_info": false, 00:09:15.711 "zone_management": false, 00:09:15.711 "zone_append": false, 00:09:15.711 "compare": false, 00:09:15.711 "compare_and_write": false, 00:09:15.711 "abort": true, 00:09:15.711 "seek_hole": false, 00:09:15.711 "seek_data": false, 00:09:15.711 
"copy": true, 00:09:15.711 "nvme_iov_md": false 00:09:15.711 }, 00:09:15.711 "memory_domains": [ 00:09:15.711 { 00:09:15.711 "dma_device_id": "system", 00:09:15.711 "dma_device_type": 1 00:09:15.711 }, 00:09:15.711 { 00:09:15.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.711 "dma_device_type": 2 00:09:15.711 } 00:09:15.711 ], 00:09:15.711 "driver_specific": {} 00:09:15.711 } 00:09:15.711 ] 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.711 17:50:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.983 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.983 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.983 "name": "Existed_Raid", 00:09:15.983 "uuid": "b4373d08-71d2-44a5-8258-15a4979ae9c3", 00:09:15.983 "strip_size_kb": 0, 00:09:15.983 "state": "online", 00:09:15.983 "raid_level": "raid1", 00:09:15.983 "superblock": false, 00:09:15.983 "num_base_bdevs": 3, 00:09:15.983 "num_base_bdevs_discovered": 3, 00:09:15.983 "num_base_bdevs_operational": 3, 00:09:15.983 "base_bdevs_list": [ 00:09:15.983 { 00:09:15.983 "name": "BaseBdev1", 00:09:15.983 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:15.983 "is_configured": true, 00:09:15.983 "data_offset": 0, 00:09:15.983 "data_size": 65536 00:09:15.983 }, 00:09:15.983 { 00:09:15.983 "name": "BaseBdev2", 00:09:15.983 "uuid": "9dd764eb-2915-4d07-b78f-db012a7d6630", 00:09:15.983 "is_configured": true, 00:09:15.983 "data_offset": 0, 00:09:15.983 "data_size": 65536 00:09:15.983 }, 00:09:15.983 { 00:09:15.983 "name": "BaseBdev3", 00:09:15.983 "uuid": "fd645183-5f8c-47c1-80ad-55f036123584", 00:09:15.983 "is_configured": true, 00:09:15.983 "data_offset": 0, 00:09:15.983 "data_size": 65536 00:09:15.983 } 00:09:15.983 ] 00:09:15.983 }' 00:09:15.983 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.983 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.243 17:50:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.243 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.244 [2024-10-25 17:50:34.548041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.244 "name": "Existed_Raid", 00:09:16.244 "aliases": [ 00:09:16.244 "b4373d08-71d2-44a5-8258-15a4979ae9c3" 00:09:16.244 ], 00:09:16.244 "product_name": "Raid Volume", 00:09:16.244 "block_size": 512, 00:09:16.244 "num_blocks": 65536, 00:09:16.244 "uuid": "b4373d08-71d2-44a5-8258-15a4979ae9c3", 00:09:16.244 "assigned_rate_limits": { 00:09:16.244 "rw_ios_per_sec": 0, 00:09:16.244 "rw_mbytes_per_sec": 0, 00:09:16.244 "r_mbytes_per_sec": 0, 00:09:16.244 "w_mbytes_per_sec": 0 00:09:16.244 }, 00:09:16.244 "claimed": false, 00:09:16.244 "zoned": false, 
00:09:16.244 "supported_io_types": { 00:09:16.244 "read": true, 00:09:16.244 "write": true, 00:09:16.244 "unmap": false, 00:09:16.244 "flush": false, 00:09:16.244 "reset": true, 00:09:16.244 "nvme_admin": false, 00:09:16.244 "nvme_io": false, 00:09:16.244 "nvme_io_md": false, 00:09:16.244 "write_zeroes": true, 00:09:16.244 "zcopy": false, 00:09:16.244 "get_zone_info": false, 00:09:16.244 "zone_management": false, 00:09:16.244 "zone_append": false, 00:09:16.244 "compare": false, 00:09:16.244 "compare_and_write": false, 00:09:16.244 "abort": false, 00:09:16.244 "seek_hole": false, 00:09:16.244 "seek_data": false, 00:09:16.244 "copy": false, 00:09:16.244 "nvme_iov_md": false 00:09:16.244 }, 00:09:16.244 "memory_domains": [ 00:09:16.244 { 00:09:16.244 "dma_device_id": "system", 00:09:16.244 "dma_device_type": 1 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.244 "dma_device_type": 2 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "dma_device_id": "system", 00:09:16.244 "dma_device_type": 1 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.244 "dma_device_type": 2 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "dma_device_id": "system", 00:09:16.244 "dma_device_type": 1 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.244 "dma_device_type": 2 00:09:16.244 } 00:09:16.244 ], 00:09:16.244 "driver_specific": { 00:09:16.244 "raid": { 00:09:16.244 "uuid": "b4373d08-71d2-44a5-8258-15a4979ae9c3", 00:09:16.244 "strip_size_kb": 0, 00:09:16.244 "state": "online", 00:09:16.244 "raid_level": "raid1", 00:09:16.244 "superblock": false, 00:09:16.244 "num_base_bdevs": 3, 00:09:16.244 "num_base_bdevs_discovered": 3, 00:09:16.244 "num_base_bdevs_operational": 3, 00:09:16.244 "base_bdevs_list": [ 00:09:16.244 { 00:09:16.244 "name": "BaseBdev1", 00:09:16.244 "uuid": "38080070-1c53-4580-82bc-6e775e709416", 00:09:16.244 "is_configured": true, 00:09:16.244 
"data_offset": 0, 00:09:16.244 "data_size": 65536 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "name": "BaseBdev2", 00:09:16.244 "uuid": "9dd764eb-2915-4d07-b78f-db012a7d6630", 00:09:16.244 "is_configured": true, 00:09:16.244 "data_offset": 0, 00:09:16.244 "data_size": 65536 00:09:16.244 }, 00:09:16.244 { 00:09:16.244 "name": "BaseBdev3", 00:09:16.244 "uuid": "fd645183-5f8c-47c1-80ad-55f036123584", 00:09:16.244 "is_configured": true, 00:09:16.244 "data_offset": 0, 00:09:16.244 "data_size": 65536 00:09:16.244 } 00:09:16.244 ] 00:09:16.244 } 00:09:16.244 } 00:09:16.244 }' 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.244 BaseBdev2 00:09:16.244 BaseBdev3' 00:09:16.244 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.504 [2024-10-25 17:50:34.827257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.504 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.764 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.764 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.764 "name": "Existed_Raid", 00:09:16.765 "uuid": "b4373d08-71d2-44a5-8258-15a4979ae9c3", 00:09:16.765 "strip_size_kb": 0, 00:09:16.765 "state": "online", 00:09:16.765 "raid_level": "raid1", 00:09:16.765 "superblock": false, 00:09:16.765 "num_base_bdevs": 3, 00:09:16.765 "num_base_bdevs_discovered": 2, 00:09:16.765 "num_base_bdevs_operational": 2, 00:09:16.765 "base_bdevs_list": [ 00:09:16.765 { 00:09:16.765 "name": null, 00:09:16.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.765 "is_configured": false, 00:09:16.765 "data_offset": 0, 00:09:16.765 "data_size": 65536 00:09:16.765 }, 00:09:16.765 { 00:09:16.765 "name": "BaseBdev2", 00:09:16.765 "uuid": "9dd764eb-2915-4d07-b78f-db012a7d6630", 00:09:16.765 "is_configured": true, 00:09:16.765 "data_offset": 0, 00:09:16.765 "data_size": 65536 00:09:16.765 }, 00:09:16.765 { 00:09:16.765 "name": "BaseBdev3", 00:09:16.765 "uuid": "fd645183-5f8c-47c1-80ad-55f036123584", 00:09:16.765 "is_configured": true, 00:09:16.765 "data_offset": 0, 00:09:16.765 "data_size": 65536 00:09:16.765 } 00:09:16.765 ] 
00:09:16.765 }' 00:09:16.765 17:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.765 17:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.025 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.025 [2024-10-25 17:50:35.372445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.285 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.285 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.285 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.285 17:50:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 [2024-10-25 17:50:35.526610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.286 [2024-10-25 17:50:35.526758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.286 [2024-10-25 17:50:35.620081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.286 [2024-10-25 17:50:35.620213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.286 [2024-10-25 17:50:35.620256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.286 17:50:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 BaseBdev2 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.286 
17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.286 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 [ 00:09:17.547 { 00:09:17.547 "name": "BaseBdev2", 00:09:17.547 "aliases": [ 00:09:17.547 "3a5223be-716a-4b0a-8b76-fcee2e4299d2" 00:09:17.547 ], 00:09:17.547 "product_name": "Malloc disk", 00:09:17.547 "block_size": 512, 00:09:17.547 "num_blocks": 65536, 00:09:17.547 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:17.547 "assigned_rate_limits": { 00:09:17.547 "rw_ios_per_sec": 0, 00:09:17.547 "rw_mbytes_per_sec": 0, 00:09:17.547 "r_mbytes_per_sec": 0, 00:09:17.547 "w_mbytes_per_sec": 0 00:09:17.547 }, 00:09:17.547 "claimed": false, 00:09:17.547 "zoned": false, 00:09:17.547 "supported_io_types": { 00:09:17.547 "read": true, 00:09:17.547 "write": true, 00:09:17.547 "unmap": true, 00:09:17.547 "flush": true, 00:09:17.547 "reset": true, 00:09:17.547 "nvme_admin": false, 00:09:17.547 "nvme_io": false, 00:09:17.547 "nvme_io_md": false, 00:09:17.547 "write_zeroes": true, 
00:09:17.547 "zcopy": true, 00:09:17.547 "get_zone_info": false, 00:09:17.547 "zone_management": false, 00:09:17.547 "zone_append": false, 00:09:17.547 "compare": false, 00:09:17.547 "compare_and_write": false, 00:09:17.547 "abort": true, 00:09:17.547 "seek_hole": false, 00:09:17.547 "seek_data": false, 00:09:17.547 "copy": true, 00:09:17.547 "nvme_iov_md": false 00:09:17.547 }, 00:09:17.547 "memory_domains": [ 00:09:17.547 { 00:09:17.547 "dma_device_id": "system", 00:09:17.547 "dma_device_type": 1 00:09:17.547 }, 00:09:17.547 { 00:09:17.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.547 "dma_device_type": 2 00:09:17.547 } 00:09:17.547 ], 00:09:17.547 "driver_specific": {} 00:09:17.547 } 00:09:17.547 ] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 BaseBdev3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.547 17:50:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 [ 00:09:17.547 { 00:09:17.547 "name": "BaseBdev3", 00:09:17.547 "aliases": [ 00:09:17.547 "6dad33c6-71c0-4122-8b42-a2c9932825e9" 00:09:17.547 ], 00:09:17.547 "product_name": "Malloc disk", 00:09:17.547 "block_size": 512, 00:09:17.547 "num_blocks": 65536, 00:09:17.547 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:17.547 "assigned_rate_limits": { 00:09:17.547 "rw_ios_per_sec": 0, 00:09:17.547 "rw_mbytes_per_sec": 0, 00:09:17.547 "r_mbytes_per_sec": 0, 00:09:17.547 "w_mbytes_per_sec": 0 00:09:17.547 }, 00:09:17.547 "claimed": false, 00:09:17.547 "zoned": false, 00:09:17.547 "supported_io_types": { 00:09:17.547 "read": true, 00:09:17.547 "write": true, 00:09:17.547 "unmap": true, 00:09:17.547 "flush": true, 00:09:17.547 "reset": true, 00:09:17.547 "nvme_admin": false, 00:09:17.547 "nvme_io": false, 00:09:17.547 "nvme_io_md": false, 00:09:17.547 "write_zeroes": true, 
00:09:17.547 "zcopy": true, 00:09:17.547 "get_zone_info": false, 00:09:17.547 "zone_management": false, 00:09:17.547 "zone_append": false, 00:09:17.547 "compare": false, 00:09:17.547 "compare_and_write": false, 00:09:17.547 "abort": true, 00:09:17.547 "seek_hole": false, 00:09:17.547 "seek_data": false, 00:09:17.547 "copy": true, 00:09:17.547 "nvme_iov_md": false 00:09:17.547 }, 00:09:17.547 "memory_domains": [ 00:09:17.547 { 00:09:17.547 "dma_device_id": "system", 00:09:17.547 "dma_device_type": 1 00:09:17.547 }, 00:09:17.547 { 00:09:17.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.547 "dma_device_type": 2 00:09:17.547 } 00:09:17.547 ], 00:09:17.547 "driver_specific": {} 00:09:17.547 } 00:09:17.547 ] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.547 [2024-10-25 17:50:35.833463] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.547 [2024-10-25 17:50:35.833551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.547 [2024-10-25 17:50:35.833611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.547 [2024-10-25 17:50:35.835344] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.547 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:17.548 "name": "Existed_Raid", 00:09:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.548 "strip_size_kb": 0, 00:09:17.548 "state": "configuring", 00:09:17.548 "raid_level": "raid1", 00:09:17.548 "superblock": false, 00:09:17.548 "num_base_bdevs": 3, 00:09:17.548 "num_base_bdevs_discovered": 2, 00:09:17.548 "num_base_bdevs_operational": 3, 00:09:17.548 "base_bdevs_list": [ 00:09:17.548 { 00:09:17.548 "name": "BaseBdev1", 00:09:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.548 "is_configured": false, 00:09:17.548 "data_offset": 0, 00:09:17.548 "data_size": 0 00:09:17.548 }, 00:09:17.548 { 00:09:17.548 "name": "BaseBdev2", 00:09:17.548 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:17.548 "is_configured": true, 00:09:17.548 "data_offset": 0, 00:09:17.548 "data_size": 65536 00:09:17.548 }, 00:09:17.548 { 00:09:17.548 "name": "BaseBdev3", 00:09:17.548 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:17.548 "is_configured": true, 00:09:17.548 "data_offset": 0, 00:09:17.548 "data_size": 65536 00:09:17.548 } 00:09:17.548 ] 00:09:17.548 }' 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.548 17:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.118 [2024-10-25 17:50:36.248778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
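The trace above is the test's `verify_raid_bdev_state` helper at work: it pulls `bdev_raid_get_bdevs all`, isolates the raid bdev with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state` and `num_base_bdevs_discovered` against its expectations. A minimal standalone sketch of that check, using an illustrative payload modeled on the JSON in the log (not captured from a live target):

```python
import json

# Illustrative bdev_raid_get_bdevs output, modeled on the trace above.
payload = json.loads("""
[{"name": "Existed_Raid",
  "uuid": "00000000-0000-0000-0000-000000000000",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3}]
""")

# Equivalent of the jq filter: .[] | select(.name == "Existed_Raid")
info = next(b for b in payload if b["name"] == "Existed_Raid")

# The fields verify_raid_bdev_state compares against its arguments
# (expected_state=configuring, raid_level=raid1, num_base_bdevs=3).
assert info["state"] == "configuring"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs"] == 3
```

With only BaseBdev2 and BaseBdev3 claimed, `num_base_bdevs_discovered` stays below `num_base_bdevs_operational`, which is why the raid bdev remains in the `configuring` state rather than going online.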
Existed_Raid configuring raid1 0 3 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.118 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.118 "name": "Existed_Raid", 00:09:18.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.118 "strip_size_kb": 0, 00:09:18.118 "state": "configuring", 00:09:18.118 "raid_level": "raid1", 00:09:18.118 "superblock": false, 00:09:18.118 "num_base_bdevs": 3, 
00:09:18.118 "num_base_bdevs_discovered": 1, 00:09:18.118 "num_base_bdevs_operational": 3, 00:09:18.118 "base_bdevs_list": [ 00:09:18.118 { 00:09:18.118 "name": "BaseBdev1", 00:09:18.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.118 "is_configured": false, 00:09:18.118 "data_offset": 0, 00:09:18.118 "data_size": 0 00:09:18.118 }, 00:09:18.118 { 00:09:18.118 "name": null, 00:09:18.118 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:18.118 "is_configured": false, 00:09:18.118 "data_offset": 0, 00:09:18.118 "data_size": 65536 00:09:18.118 }, 00:09:18.119 { 00:09:18.119 "name": "BaseBdev3", 00:09:18.119 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:18.119 "is_configured": true, 00:09:18.119 "data_offset": 0, 00:09:18.119 "data_size": 65536 00:09:18.119 } 00:09:18.119 ] 00:09:18.119 }' 00:09:18.119 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.119 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.379 17:50:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 [2024-10-25 17:50:36.739796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.379 BaseBdev1 00:09:18.379 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.380 [ 00:09:18.380 { 00:09:18.380 "name": "BaseBdev1", 00:09:18.380 "aliases": [ 00:09:18.380 "71a49ea7-5b9d-4495-9322-d85edd919a01" 00:09:18.380 ], 00:09:18.380 "product_name": "Malloc disk", 
00:09:18.380 "block_size": 512, 00:09:18.380 "num_blocks": 65536, 00:09:18.380 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:18.380 "assigned_rate_limits": { 00:09:18.380 "rw_ios_per_sec": 0, 00:09:18.380 "rw_mbytes_per_sec": 0, 00:09:18.380 "r_mbytes_per_sec": 0, 00:09:18.380 "w_mbytes_per_sec": 0 00:09:18.380 }, 00:09:18.380 "claimed": true, 00:09:18.380 "claim_type": "exclusive_write", 00:09:18.380 "zoned": false, 00:09:18.380 "supported_io_types": { 00:09:18.380 "read": true, 00:09:18.380 "write": true, 00:09:18.380 "unmap": true, 00:09:18.380 "flush": true, 00:09:18.380 "reset": true, 00:09:18.380 "nvme_admin": false, 00:09:18.380 "nvme_io": false, 00:09:18.380 "nvme_io_md": false, 00:09:18.380 "write_zeroes": true, 00:09:18.380 "zcopy": true, 00:09:18.380 "get_zone_info": false, 00:09:18.380 "zone_management": false, 00:09:18.380 "zone_append": false, 00:09:18.380 "compare": false, 00:09:18.380 "compare_and_write": false, 00:09:18.380 "abort": true, 00:09:18.380 "seek_hole": false, 00:09:18.380 "seek_data": false, 00:09:18.380 "copy": true, 00:09:18.380 "nvme_iov_md": false 00:09:18.380 }, 00:09:18.380 "memory_domains": [ 00:09:18.380 { 00:09:18.380 "dma_device_id": "system", 00:09:18.380 "dma_device_type": 1 00:09:18.380 }, 00:09:18.380 { 00:09:18.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.380 "dma_device_type": 2 00:09:18.380 } 00:09:18.380 ], 00:09:18.380 "driver_specific": {} 00:09:18.380 } 00:09:18.380 ] 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.380 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.640 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.640 "name": "Existed_Raid", 00:09:18.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.640 "strip_size_kb": 0, 00:09:18.640 "state": "configuring", 00:09:18.640 "raid_level": "raid1", 00:09:18.640 "superblock": false, 00:09:18.640 "num_base_bdevs": 3, 00:09:18.640 "num_base_bdevs_discovered": 2, 00:09:18.640 "num_base_bdevs_operational": 3, 00:09:18.640 "base_bdevs_list": [ 00:09:18.640 { 00:09:18.640 "name": "BaseBdev1", 00:09:18.640 "uuid": 
"71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:18.640 "is_configured": true, 00:09:18.640 "data_offset": 0, 00:09:18.640 "data_size": 65536 00:09:18.640 }, 00:09:18.640 { 00:09:18.640 "name": null, 00:09:18.640 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:18.640 "is_configured": false, 00:09:18.640 "data_offset": 0, 00:09:18.640 "data_size": 65536 00:09:18.640 }, 00:09:18.640 { 00:09:18.640 "name": "BaseBdev3", 00:09:18.640 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:18.640 "is_configured": true, 00:09:18.640 "data_offset": 0, 00:09:18.640 "data_size": 65536 00:09:18.640 } 00:09:18.640 ] 00:09:18.640 }' 00:09:18.640 17:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.640 17:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.900 [2024-10-25 17:50:37.262913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.900 17:50:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.900 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.900 "name": "Existed_Raid", 00:09:18.900 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:18.900 "strip_size_kb": 0, 00:09:18.900 "state": "configuring", 00:09:18.900 "raid_level": "raid1", 00:09:18.900 "superblock": false, 00:09:18.900 "num_base_bdevs": 3, 00:09:18.900 "num_base_bdevs_discovered": 1, 00:09:18.900 "num_base_bdevs_operational": 3, 00:09:18.900 "base_bdevs_list": [ 00:09:18.900 { 00:09:18.900 "name": "BaseBdev1", 00:09:18.900 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:18.900 "is_configured": true, 00:09:18.900 "data_offset": 0, 00:09:18.900 "data_size": 65536 00:09:18.900 }, 00:09:18.900 { 00:09:18.900 "name": null, 00:09:18.900 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:18.900 "is_configured": false, 00:09:18.900 "data_offset": 0, 00:09:18.900 "data_size": 65536 00:09:18.900 }, 00:09:18.900 { 00:09:18.900 "name": null, 00:09:18.900 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:18.900 "is_configured": false, 00:09:18.900 "data_offset": 0, 00:09:18.901 "data_size": 65536 00:09:18.901 } 00:09:18.901 ] 00:09:18.901 }' 00:09:18.901 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.901 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
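The `base_bdevs_list` snapshots in the trace show what hot-removing a base bdev does to a configuring raid: the removed device's slot is retained (its uuid is preserved so it can be re-added later), but `name` becomes null and `is_configured` flips to false, which is exactly what the test's `jq '.[0].base_bdevs_list[N].is_configured'` probes check. A small sketch over an illustrative snippet modeled on the log:

```python
import json

# Illustrative base_bdevs_list after BaseBdev2 was removed, modeled on the trace.
raid = json.loads("""
{"base_bdevs_list": [
  {"name": "BaseBdev1",
   "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", "is_configured": true},
  {"name": null,
   "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", "is_configured": false},
  {"name": "BaseBdev3",
   "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", "is_configured": true}]}
""")

# Equivalent of jq '.[0].base_bdevs_list[1].is_configured': the removed
# base bdev keeps its slot and uuid but is no longer named or configured.
slot = raid["base_bdevs_list"][1]
assert slot["name"] is None
assert slot["is_configured"] is False

# Configured count is what feeds num_base_bdevs_discovered in the raid state.
configured = sum(1 for b in raid["base_bdevs_list"] if b["is_configured"])
```

Because the uuid survives in the slot, a later `bdev_raid_add_base_bdev` (or recreating a malloc bdev with `-u <same uuid>`, as the trace does for NewBaseBdev) lets the raid re-claim the device into the same position.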
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.472 [2024-10-25 17:50:37.750237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.472 "name": "Existed_Raid", 00:09:19.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.472 "strip_size_kb": 0, 00:09:19.472 "state": "configuring", 00:09:19.472 "raid_level": "raid1", 00:09:19.472 "superblock": false, 00:09:19.472 "num_base_bdevs": 3, 00:09:19.472 "num_base_bdevs_discovered": 2, 00:09:19.472 "num_base_bdevs_operational": 3, 00:09:19.472 "base_bdevs_list": [ 00:09:19.472 { 00:09:19.472 "name": "BaseBdev1", 00:09:19.472 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:19.472 "is_configured": true, 00:09:19.472 "data_offset": 0, 00:09:19.472 "data_size": 65536 00:09:19.472 }, 00:09:19.472 { 00:09:19.472 "name": null, 00:09:19.472 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:19.472 "is_configured": false, 00:09:19.472 "data_offset": 0, 00:09:19.472 "data_size": 65536 00:09:19.472 }, 00:09:19.472 { 00:09:19.472 "name": "BaseBdev3", 00:09:19.472 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:19.472 "is_configured": true, 00:09:19.472 "data_offset": 0, 00:09:19.472 "data_size": 65536 00:09:19.472 } 00:09:19.472 ] 00:09:19.472 }' 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.472 17:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.732 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.732 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.732 17:50:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.732 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.732 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.993 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.994 [2024-10-25 17:50:38.181519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.994 17:50:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.994 "name": "Existed_Raid", 00:09:19.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.994 "strip_size_kb": 0, 00:09:19.994 "state": "configuring", 00:09:19.994 "raid_level": "raid1", 00:09:19.994 "superblock": false, 00:09:19.994 "num_base_bdevs": 3, 00:09:19.994 "num_base_bdevs_discovered": 1, 00:09:19.994 "num_base_bdevs_operational": 3, 00:09:19.994 "base_bdevs_list": [ 00:09:19.994 { 00:09:19.994 "name": null, 00:09:19.994 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:19.994 "is_configured": false, 00:09:19.994 "data_offset": 0, 00:09:19.994 "data_size": 65536 00:09:19.994 }, 00:09:19.994 { 00:09:19.994 "name": null, 00:09:19.994 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:19.994 "is_configured": false, 00:09:19.994 "data_offset": 0, 00:09:19.994 "data_size": 65536 00:09:19.994 }, 00:09:19.994 { 00:09:19.994 "name": "BaseBdev3", 00:09:19.994 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:19.994 "is_configured": true, 00:09:19.994 "data_offset": 0, 00:09:19.994 "data_size": 65536 00:09:19.994 } 00:09:19.994 ] 00:09:19.994 }' 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.994 17:50:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.565 [2024-10-25 17:50:38.789746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.565 "name": "Existed_Raid", 00:09:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.565 "strip_size_kb": 0, 00:09:20.565 "state": "configuring", 00:09:20.565 "raid_level": "raid1", 00:09:20.565 "superblock": false, 00:09:20.565 "num_base_bdevs": 3, 00:09:20.565 "num_base_bdevs_discovered": 2, 00:09:20.565 "num_base_bdevs_operational": 3, 00:09:20.565 "base_bdevs_list": [ 00:09:20.565 { 00:09:20.565 "name": null, 00:09:20.565 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:20.565 "is_configured": false, 00:09:20.565 "data_offset": 0, 00:09:20.565 "data_size": 65536 00:09:20.565 }, 00:09:20.565 { 00:09:20.565 "name": "BaseBdev2", 00:09:20.565 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:20.565 "is_configured": true, 00:09:20.565 "data_offset": 0, 00:09:20.565 "data_size": 65536 00:09:20.565 }, 00:09:20.565 { 
00:09:20.565 "name": "BaseBdev3", 00:09:20.565 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:20.565 "is_configured": true, 00:09:20.565 "data_offset": 0, 00:09:20.565 "data_size": 65536 00:09:20.565 } 00:09:20.565 ] 00:09:20.565 }' 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.565 17:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.825 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71a49ea7-5b9d-4495-9322-d85edd919a01 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.086 17:50:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 [2024-10-25 17:50:39.337081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:21.086 [2024-10-25 17:50:39.337126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:21.086 [2024-10-25 17:50:39.337133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:21.086 [2024-10-25 17:50:39.337373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:21.086 [2024-10-25 17:50:39.337535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:21.086 [2024-10-25 17:50:39.337548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:21.086 [2024-10-25 17:50:39.337789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.086 NewBaseBdev 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 [ 00:09:21.086 { 00:09:21.086 "name": "NewBaseBdev", 00:09:21.086 "aliases": [ 00:09:21.086 "71a49ea7-5b9d-4495-9322-d85edd919a01" 00:09:21.086 ], 00:09:21.086 "product_name": "Malloc disk", 00:09:21.086 "block_size": 512, 00:09:21.086 "num_blocks": 65536, 00:09:21.086 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:21.086 "assigned_rate_limits": { 00:09:21.086 "rw_ios_per_sec": 0, 00:09:21.086 "rw_mbytes_per_sec": 0, 00:09:21.086 "r_mbytes_per_sec": 0, 00:09:21.086 "w_mbytes_per_sec": 0 00:09:21.086 }, 00:09:21.086 "claimed": true, 00:09:21.086 "claim_type": "exclusive_write", 00:09:21.086 "zoned": false, 00:09:21.086 "supported_io_types": { 00:09:21.086 "read": true, 00:09:21.086 "write": true, 00:09:21.086 "unmap": true, 00:09:21.086 "flush": true, 00:09:21.086 "reset": true, 00:09:21.086 "nvme_admin": false, 00:09:21.086 "nvme_io": false, 00:09:21.086 "nvme_io_md": false, 00:09:21.086 "write_zeroes": true, 00:09:21.086 "zcopy": true, 00:09:21.086 "get_zone_info": false, 00:09:21.086 "zone_management": false, 00:09:21.086 "zone_append": false, 00:09:21.086 "compare": false, 00:09:21.086 "compare_and_write": false, 00:09:21.086 "abort": true, 00:09:21.086 "seek_hole": false, 00:09:21.086 "seek_data": false, 00:09:21.086 "copy": true, 00:09:21.086 "nvme_iov_md": false 00:09:21.086 }, 00:09:21.086 "memory_domains": [ 00:09:21.086 { 00:09:21.086 
"dma_device_id": "system", 00:09:21.086 "dma_device_type": 1 00:09:21.086 }, 00:09:21.086 { 00:09:21.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.086 "dma_device_type": 2 00:09:21.086 } 00:09:21.086 ], 00:09:21.086 "driver_specific": {} 00:09:21.086 } 00:09:21.086 ] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.086 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.086 "name": "Existed_Raid", 00:09:21.086 "uuid": "66705f87-ae6f-43ff-901a-3f497c58e761", 00:09:21.086 "strip_size_kb": 0, 00:09:21.086 "state": "online", 00:09:21.086 "raid_level": "raid1", 00:09:21.086 "superblock": false, 00:09:21.086 "num_base_bdevs": 3, 00:09:21.087 "num_base_bdevs_discovered": 3, 00:09:21.087 "num_base_bdevs_operational": 3, 00:09:21.087 "base_bdevs_list": [ 00:09:21.087 { 00:09:21.087 "name": "NewBaseBdev", 00:09:21.087 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:21.087 "is_configured": true, 00:09:21.087 "data_offset": 0, 00:09:21.087 "data_size": 65536 00:09:21.087 }, 00:09:21.087 { 00:09:21.087 "name": "BaseBdev2", 00:09:21.087 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:21.087 "is_configured": true, 00:09:21.087 "data_offset": 0, 00:09:21.087 "data_size": 65536 00:09:21.087 }, 00:09:21.087 { 00:09:21.087 "name": "BaseBdev3", 00:09:21.087 "uuid": "6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:21.087 "is_configured": true, 00:09:21.087 "data_offset": 0, 00:09:21.087 "data_size": 65536 00:09:21.087 } 00:09:21.087 ] 00:09:21.087 }' 00:09:21.087 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.087 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.657 17:50:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.657 [2024-10-25 17:50:39.804673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.657 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.657 "name": "Existed_Raid", 00:09:21.657 "aliases": [ 00:09:21.657 "66705f87-ae6f-43ff-901a-3f497c58e761" 00:09:21.657 ], 00:09:21.657 "product_name": "Raid Volume", 00:09:21.657 "block_size": 512, 00:09:21.657 "num_blocks": 65536, 00:09:21.657 "uuid": "66705f87-ae6f-43ff-901a-3f497c58e761", 00:09:21.657 "assigned_rate_limits": { 00:09:21.657 "rw_ios_per_sec": 0, 00:09:21.657 "rw_mbytes_per_sec": 0, 00:09:21.657 "r_mbytes_per_sec": 0, 00:09:21.657 "w_mbytes_per_sec": 0 00:09:21.657 }, 00:09:21.657 "claimed": false, 00:09:21.657 "zoned": false, 00:09:21.657 "supported_io_types": { 00:09:21.657 "read": true, 00:09:21.657 "write": true, 00:09:21.657 "unmap": false, 00:09:21.657 "flush": false, 00:09:21.657 "reset": true, 00:09:21.657 "nvme_admin": false, 00:09:21.657 "nvme_io": false, 00:09:21.657 "nvme_io_md": false, 00:09:21.657 "write_zeroes": true, 00:09:21.657 "zcopy": false, 00:09:21.657 
"get_zone_info": false, 00:09:21.657 "zone_management": false, 00:09:21.657 "zone_append": false, 00:09:21.657 "compare": false, 00:09:21.657 "compare_and_write": false, 00:09:21.657 "abort": false, 00:09:21.657 "seek_hole": false, 00:09:21.657 "seek_data": false, 00:09:21.657 "copy": false, 00:09:21.657 "nvme_iov_md": false 00:09:21.657 }, 00:09:21.657 "memory_domains": [ 00:09:21.657 { 00:09:21.657 "dma_device_id": "system", 00:09:21.657 "dma_device_type": 1 00:09:21.657 }, 00:09:21.657 { 00:09:21.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.657 "dma_device_type": 2 00:09:21.657 }, 00:09:21.657 { 00:09:21.657 "dma_device_id": "system", 00:09:21.657 "dma_device_type": 1 00:09:21.657 }, 00:09:21.657 { 00:09:21.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.658 "dma_device_type": 2 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "dma_device_id": "system", 00:09:21.658 "dma_device_type": 1 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.658 "dma_device_type": 2 00:09:21.658 } 00:09:21.658 ], 00:09:21.658 "driver_specific": { 00:09:21.658 "raid": { 00:09:21.658 "uuid": "66705f87-ae6f-43ff-901a-3f497c58e761", 00:09:21.658 "strip_size_kb": 0, 00:09:21.658 "state": "online", 00:09:21.658 "raid_level": "raid1", 00:09:21.658 "superblock": false, 00:09:21.658 "num_base_bdevs": 3, 00:09:21.658 "num_base_bdevs_discovered": 3, 00:09:21.658 "num_base_bdevs_operational": 3, 00:09:21.658 "base_bdevs_list": [ 00:09:21.658 { 00:09:21.658 "name": "NewBaseBdev", 00:09:21.658 "uuid": "71a49ea7-5b9d-4495-9322-d85edd919a01", 00:09:21.658 "is_configured": true, 00:09:21.658 "data_offset": 0, 00:09:21.658 "data_size": 65536 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "name": "BaseBdev2", 00:09:21.658 "uuid": "3a5223be-716a-4b0a-8b76-fcee2e4299d2", 00:09:21.658 "is_configured": true, 00:09:21.658 "data_offset": 0, 00:09:21.658 "data_size": 65536 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "name": "BaseBdev3", 00:09:21.658 "uuid": 
"6dad33c6-71c0-4122-8b42-a2c9932825e9", 00:09:21.658 "is_configured": true, 00:09:21.658 "data_offset": 0, 00:09:21.658 "data_size": 65536 00:09:21.658 } 00:09:21.658 ] 00:09:21.658 } 00:09:21.658 } 00:09:21.658 }' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.658 BaseBdev2 00:09:21.658 BaseBdev3' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.658 17:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 
[2024-10-25 17:50:40.055973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.658 [2024-10-25 17:50:40.056043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.658 [2024-10-25 17:50:40.056133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.658 [2024-10-25 17:50:40.056430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.658 [2024-10-25 17:50:40.056483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67159 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67159 ']' 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67159 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.658 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67159 00:09:21.918 killing process with pid 67159 00:09:21.918 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.918 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.918 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67159' 00:09:21.918 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67159 00:09:21.918 [2024-10-25 
17:50:40.105434] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.918 17:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67159 00:09:22.178 [2024-10-25 17:50:40.397385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.118 00:09:23.118 real 0m10.302s 00:09:23.118 user 0m16.336s 00:09:23.118 sys 0m1.918s 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.118 ************************************ 00:09:23.118 END TEST raid_state_function_test 00:09:23.118 ************************************ 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.118 17:50:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:23.118 17:50:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:23.118 17:50:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.118 17:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.118 ************************************ 00:09:23.118 START TEST raid_state_function_test_sb 00:09:23.118 ************************************ 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:23.118 17:50:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.118 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:23.379 
17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67776 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67776' 00:09:23.379 Process raid pid: 67776 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67776 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67776 ']' 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.379 17:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.379 [2024-10-25 17:50:41.653477] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:09:23.379 [2024-10-25 17:50:41.653611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.639 [2024-10-25 17:50:41.833351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.639 [2024-10-25 17:50:41.949643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.899 [2024-10-25 17:50:42.151093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.899 [2024-10-25 17:50:42.151214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.160 [2024-10-25 17:50:42.476425] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.160 [2024-10-25 17:50:42.476477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.160 [2024-10-25 17:50:42.476487] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.160 [2024-10-25 17:50:42.476497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.160 [2024-10-25 17:50:42.476503] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:24.160 [2024-10-25 17:50:42.476511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.160 "name": "Existed_Raid", 00:09:24.160 "uuid": "9a085276-4484-4b15-8afd-a9c13e130daf", 00:09:24.160 "strip_size_kb": 0, 00:09:24.160 "state": "configuring", 00:09:24.160 "raid_level": "raid1", 00:09:24.160 "superblock": true, 00:09:24.160 "num_base_bdevs": 3, 00:09:24.160 "num_base_bdevs_discovered": 0, 00:09:24.160 "num_base_bdevs_operational": 3, 00:09:24.160 "base_bdevs_list": [ 00:09:24.160 { 00:09:24.160 "name": "BaseBdev1", 00:09:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.160 "is_configured": false, 00:09:24.160 "data_offset": 0, 00:09:24.160 "data_size": 0 00:09:24.160 }, 00:09:24.160 { 00:09:24.160 "name": "BaseBdev2", 00:09:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.160 "is_configured": false, 00:09:24.160 "data_offset": 0, 00:09:24.160 "data_size": 0 00:09:24.160 }, 00:09:24.160 { 00:09:24.160 "name": "BaseBdev3", 00:09:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.160 "is_configured": false, 00:09:24.160 "data_offset": 0, 00:09:24.160 "data_size": 0 00:09:24.160 } 00:09:24.160 ] 00:09:24.160 }' 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.160 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.730 [2024-10-25 17:50:42.911670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.730 [2024-10-25 17:50:42.911747] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.730 [2024-10-25 17:50:42.923657] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.730 [2024-10-25 17:50:42.923736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.730 [2024-10-25 17:50:42.923763] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.730 [2024-10-25 17:50:42.923785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.730 [2024-10-25 17:50:42.923802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.730 [2024-10-25 17:50:42.923823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.730 [2024-10-25 17:50:42.973244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.730 BaseBdev1 
00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.730 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.731 17:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 [ 00:09:24.731 { 00:09:24.731 "name": "BaseBdev1", 00:09:24.731 "aliases": [ 00:09:24.731 "78266e4c-1f2c-4b62-9d36-42395ce528bf" 00:09:24.731 ], 00:09:24.731 "product_name": "Malloc disk", 00:09:24.731 "block_size": 512, 00:09:24.731 "num_blocks": 65536, 00:09:24.731 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:24.731 "assigned_rate_limits": { 00:09:24.731 
"rw_ios_per_sec": 0, 00:09:24.731 "rw_mbytes_per_sec": 0, 00:09:24.731 "r_mbytes_per_sec": 0, 00:09:24.731 "w_mbytes_per_sec": 0 00:09:24.731 }, 00:09:24.731 "claimed": true, 00:09:24.731 "claim_type": "exclusive_write", 00:09:24.731 "zoned": false, 00:09:24.731 "supported_io_types": { 00:09:24.731 "read": true, 00:09:24.731 "write": true, 00:09:24.731 "unmap": true, 00:09:24.731 "flush": true, 00:09:24.731 "reset": true, 00:09:24.731 "nvme_admin": false, 00:09:24.731 "nvme_io": false, 00:09:24.731 "nvme_io_md": false, 00:09:24.731 "write_zeroes": true, 00:09:24.731 "zcopy": true, 00:09:24.731 "get_zone_info": false, 00:09:24.731 "zone_management": false, 00:09:24.731 "zone_append": false, 00:09:24.731 "compare": false, 00:09:24.731 "compare_and_write": false, 00:09:24.731 "abort": true, 00:09:24.731 "seek_hole": false, 00:09:24.731 "seek_data": false, 00:09:24.731 "copy": true, 00:09:24.731 "nvme_iov_md": false 00:09:24.731 }, 00:09:24.731 "memory_domains": [ 00:09:24.731 { 00:09:24.731 "dma_device_id": "system", 00:09:24.731 "dma_device_type": 1 00:09:24.731 }, 00:09:24.731 { 00:09:24.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.731 "dma_device_type": 2 00:09:24.731 } 00:09:24.731 ], 00:09:24.731 "driver_specific": {} 00:09:24.731 } 00:09:24.731 ] 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.731 "name": "Existed_Raid", 00:09:24.731 "uuid": "6296cbab-f1ed-49b2-bcbc-f3efacdbba3f", 00:09:24.731 "strip_size_kb": 0, 00:09:24.731 "state": "configuring", 00:09:24.731 "raid_level": "raid1", 00:09:24.731 "superblock": true, 00:09:24.731 "num_base_bdevs": 3, 00:09:24.731 "num_base_bdevs_discovered": 1, 00:09:24.731 "num_base_bdevs_operational": 3, 00:09:24.731 "base_bdevs_list": [ 00:09:24.731 { 00:09:24.731 "name": "BaseBdev1", 00:09:24.731 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:24.731 "is_configured": true, 00:09:24.731 "data_offset": 2048, 00:09:24.731 "data_size": 63488 
00:09:24.731 }, 00:09:24.731 { 00:09:24.731 "name": "BaseBdev2", 00:09:24.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.731 "is_configured": false, 00:09:24.731 "data_offset": 0, 00:09:24.731 "data_size": 0 00:09:24.731 }, 00:09:24.731 { 00:09:24.731 "name": "BaseBdev3", 00:09:24.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.731 "is_configured": false, 00:09:24.731 "data_offset": 0, 00:09:24.731 "data_size": 0 00:09:24.731 } 00:09:24.731 ] 00:09:24.731 }' 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.731 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 [2024-10-25 17:50:43.492342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.302 [2024-10-25 17:50:43.492435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 [2024-10-25 17:50:43.504396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.302 [2024-10-25 17:50:43.506117] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.302 [2024-10-25 17:50:43.506207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.302 [2024-10-25 17:50:43.506221] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.302 [2024-10-25 17:50:43.506230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.302 "name": "Existed_Raid", 00:09:25.302 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:25.302 "strip_size_kb": 0, 00:09:25.302 "state": "configuring", 00:09:25.302 "raid_level": "raid1", 00:09:25.302 "superblock": true, 00:09:25.302 "num_base_bdevs": 3, 00:09:25.302 "num_base_bdevs_discovered": 1, 00:09:25.302 "num_base_bdevs_operational": 3, 00:09:25.302 "base_bdevs_list": [ 00:09:25.302 { 00:09:25.302 "name": "BaseBdev1", 00:09:25.302 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:25.302 "is_configured": true, 00:09:25.302 "data_offset": 2048, 00:09:25.302 "data_size": 63488 00:09:25.302 }, 00:09:25.302 { 00:09:25.302 "name": "BaseBdev2", 00:09:25.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.302 "is_configured": false, 00:09:25.302 "data_offset": 0, 00:09:25.302 "data_size": 0 00:09:25.302 }, 00:09:25.302 { 00:09:25.302 "name": "BaseBdev3", 00:09:25.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.302 "is_configured": false, 00:09:25.302 "data_offset": 0, 00:09:25.302 "data_size": 0 00:09:25.302 } 00:09:25.302 ] 00:09:25.302 }' 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.302 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.562 [2024-10-25 17:50:43.933050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.562 BaseBdev2 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:25.562 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.562 [ 00:09:25.562 { 00:09:25.562 "name": "BaseBdev2", 00:09:25.562 "aliases": [ 00:09:25.562 "7ec59b7b-35b9-4955-a301-27c564c465a8" 00:09:25.562 ], 00:09:25.562 "product_name": "Malloc disk", 00:09:25.562 "block_size": 512, 00:09:25.562 "num_blocks": 65536, 00:09:25.562 "uuid": "7ec59b7b-35b9-4955-a301-27c564c465a8", 00:09:25.562 "assigned_rate_limits": { 00:09:25.562 "rw_ios_per_sec": 0, 00:09:25.562 "rw_mbytes_per_sec": 0, 00:09:25.562 "r_mbytes_per_sec": 0, 00:09:25.562 "w_mbytes_per_sec": 0 00:09:25.562 }, 00:09:25.562 "claimed": true, 00:09:25.562 "claim_type": "exclusive_write", 00:09:25.562 "zoned": false, 00:09:25.562 "supported_io_types": { 00:09:25.562 "read": true, 00:09:25.562 "write": true, 00:09:25.562 "unmap": true, 00:09:25.562 "flush": true, 00:09:25.562 "reset": true, 00:09:25.562 "nvme_admin": false, 00:09:25.563 "nvme_io": false, 00:09:25.563 "nvme_io_md": false, 00:09:25.563 "write_zeroes": true, 00:09:25.563 "zcopy": true, 00:09:25.563 "get_zone_info": false, 00:09:25.563 "zone_management": false, 00:09:25.563 "zone_append": false, 00:09:25.563 "compare": false, 00:09:25.563 "compare_and_write": false, 00:09:25.563 "abort": true, 00:09:25.563 "seek_hole": false, 00:09:25.563 "seek_data": false, 00:09:25.563 "copy": true, 00:09:25.563 "nvme_iov_md": false 00:09:25.563 }, 00:09:25.563 "memory_domains": [ 00:09:25.563 { 00:09:25.563 "dma_device_id": "system", 00:09:25.563 "dma_device_type": 1 00:09:25.563 }, 00:09:25.563 { 00:09:25.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.563 "dma_device_type": 2 00:09:25.563 } 00:09:25.563 ], 00:09:25.563 "driver_specific": {} 00:09:25.563 } 00:09:25.563 ] 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.563 17:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.823 
17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.823 "name": "Existed_Raid", 00:09:25.823 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:25.823 "strip_size_kb": 0, 00:09:25.823 "state": "configuring", 00:09:25.823 "raid_level": "raid1", 00:09:25.823 "superblock": true, 00:09:25.823 "num_base_bdevs": 3, 00:09:25.823 "num_base_bdevs_discovered": 2, 00:09:25.823 "num_base_bdevs_operational": 3, 00:09:25.823 "base_bdevs_list": [ 00:09:25.823 { 00:09:25.823 "name": "BaseBdev1", 00:09:25.823 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:25.823 "is_configured": true, 00:09:25.823 "data_offset": 2048, 00:09:25.823 "data_size": 63488 00:09:25.823 }, 00:09:25.823 { 00:09:25.823 "name": "BaseBdev2", 00:09:25.823 "uuid": "7ec59b7b-35b9-4955-a301-27c564c465a8", 00:09:25.823 "is_configured": true, 00:09:25.823 "data_offset": 2048, 00:09:25.823 "data_size": 63488 00:09:25.823 }, 00:09:25.823 { 00:09:25.823 "name": "BaseBdev3", 00:09:25.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.823 "is_configured": false, 00:09:25.823 "data_offset": 0, 00:09:25.823 "data_size": 0 00:09:25.823 } 00:09:25.823 ] 00:09:25.823 }' 00:09:25.823 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.823 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 [2024-10-25 17:50:44.435408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.083 [2024-10-25 17:50:44.435657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:26.083 [2024-10-25 17:50:44.435677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.083 [2024-10-25 17:50:44.435984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.083 [2024-10-25 17:50:44.436146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.083 [2024-10-25 17:50:44.436155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:26.083 BaseBdev3 00:09:26.083 [2024-10-25 17:50:44.436294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.083 17:50:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.083 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 [ 00:09:26.083 { 00:09:26.083 "name": "BaseBdev3", 00:09:26.083 "aliases": [ 00:09:26.083 "641c2db8-c155-4caa-a3c9-2026c2abede8" 00:09:26.083 ], 00:09:26.083 "product_name": "Malloc disk", 00:09:26.083 "block_size": 512, 00:09:26.083 "num_blocks": 65536, 00:09:26.083 "uuid": "641c2db8-c155-4caa-a3c9-2026c2abede8", 00:09:26.083 "assigned_rate_limits": { 00:09:26.083 "rw_ios_per_sec": 0, 00:09:26.083 "rw_mbytes_per_sec": 0, 00:09:26.083 "r_mbytes_per_sec": 0, 00:09:26.083 "w_mbytes_per_sec": 0 00:09:26.083 }, 00:09:26.083 "claimed": true, 00:09:26.083 "claim_type": "exclusive_write", 00:09:26.083 "zoned": false, 00:09:26.083 "supported_io_types": { 00:09:26.083 "read": true, 00:09:26.083 "write": true, 00:09:26.083 "unmap": true, 00:09:26.083 "flush": true, 00:09:26.083 "reset": true, 00:09:26.083 "nvme_admin": false, 00:09:26.083 "nvme_io": false, 00:09:26.083 "nvme_io_md": false, 00:09:26.083 "write_zeroes": true, 00:09:26.083 "zcopy": true, 00:09:26.083 "get_zone_info": false, 00:09:26.083 "zone_management": false, 00:09:26.083 "zone_append": false, 00:09:26.083 "compare": false, 00:09:26.083 "compare_and_write": false, 00:09:26.083 "abort": true, 00:09:26.083 "seek_hole": false, 00:09:26.083 "seek_data": false, 00:09:26.083 "copy": true, 00:09:26.083 "nvme_iov_md": false 00:09:26.083 }, 00:09:26.083 "memory_domains": [ 00:09:26.083 { 00:09:26.083 "dma_device_id": "system", 00:09:26.083 "dma_device_type": 1 00:09:26.084 }, 00:09:26.084 { 00:09:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.084 "dma_device_type": 2 00:09:26.084 } 00:09:26.084 ], 00:09:26.084 "driver_specific": {} 00:09:26.084 } 00:09:26.084 ] 
00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.084 
17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.084 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.344 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.344 "name": "Existed_Raid", 00:09:26.344 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:26.344 "strip_size_kb": 0, 00:09:26.344 "state": "online", 00:09:26.344 "raid_level": "raid1", 00:09:26.344 "superblock": true, 00:09:26.344 "num_base_bdevs": 3, 00:09:26.344 "num_base_bdevs_discovered": 3, 00:09:26.344 "num_base_bdevs_operational": 3, 00:09:26.344 "base_bdevs_list": [ 00:09:26.344 { 00:09:26.344 "name": "BaseBdev1", 00:09:26.344 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:26.344 "is_configured": true, 00:09:26.344 "data_offset": 2048, 00:09:26.344 "data_size": 63488 00:09:26.344 }, 00:09:26.344 { 00:09:26.344 "name": "BaseBdev2", 00:09:26.344 "uuid": "7ec59b7b-35b9-4955-a301-27c564c465a8", 00:09:26.344 "is_configured": true, 00:09:26.344 "data_offset": 2048, 00:09:26.344 "data_size": 63488 00:09:26.344 }, 00:09:26.344 { 00:09:26.344 "name": "BaseBdev3", 00:09:26.344 "uuid": "641c2db8-c155-4caa-a3c9-2026c2abede8", 00:09:26.344 "is_configured": true, 00:09:26.344 "data_offset": 2048, 00:09:26.344 "data_size": 63488 00:09:26.344 } 00:09:26.344 ] 00:09:26.344 }' 00:09:26.344 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.344 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.603 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.604 [2024-10-25 17:50:44.898943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.604 "name": "Existed_Raid", 00:09:26.604 "aliases": [ 00:09:26.604 "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3" 00:09:26.604 ], 00:09:26.604 "product_name": "Raid Volume", 00:09:26.604 "block_size": 512, 00:09:26.604 "num_blocks": 63488, 00:09:26.604 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:26.604 "assigned_rate_limits": { 00:09:26.604 "rw_ios_per_sec": 0, 00:09:26.604 "rw_mbytes_per_sec": 0, 00:09:26.604 "r_mbytes_per_sec": 0, 00:09:26.604 "w_mbytes_per_sec": 0 00:09:26.604 }, 00:09:26.604 "claimed": false, 00:09:26.604 "zoned": false, 00:09:26.604 "supported_io_types": { 00:09:26.604 "read": true, 00:09:26.604 "write": true, 00:09:26.604 "unmap": false, 00:09:26.604 "flush": false, 00:09:26.604 "reset": true, 00:09:26.604 "nvme_admin": false, 00:09:26.604 "nvme_io": false, 00:09:26.604 "nvme_io_md": false, 00:09:26.604 "write_zeroes": true, 
00:09:26.604 "zcopy": false, 00:09:26.604 "get_zone_info": false, 00:09:26.604 "zone_management": false, 00:09:26.604 "zone_append": false, 00:09:26.604 "compare": false, 00:09:26.604 "compare_and_write": false, 00:09:26.604 "abort": false, 00:09:26.604 "seek_hole": false, 00:09:26.604 "seek_data": false, 00:09:26.604 "copy": false, 00:09:26.604 "nvme_iov_md": false 00:09:26.604 }, 00:09:26.604 "memory_domains": [ 00:09:26.604 { 00:09:26.604 "dma_device_id": "system", 00:09:26.604 "dma_device_type": 1 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.604 "dma_device_type": 2 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "dma_device_id": "system", 00:09:26.604 "dma_device_type": 1 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.604 "dma_device_type": 2 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "dma_device_id": "system", 00:09:26.604 "dma_device_type": 1 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.604 "dma_device_type": 2 00:09:26.604 } 00:09:26.604 ], 00:09:26.604 "driver_specific": { 00:09:26.604 "raid": { 00:09:26.604 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:26.604 "strip_size_kb": 0, 00:09:26.604 "state": "online", 00:09:26.604 "raid_level": "raid1", 00:09:26.604 "superblock": true, 00:09:26.604 "num_base_bdevs": 3, 00:09:26.604 "num_base_bdevs_discovered": 3, 00:09:26.604 "num_base_bdevs_operational": 3, 00:09:26.604 "base_bdevs_list": [ 00:09:26.604 { 00:09:26.604 "name": "BaseBdev1", 00:09:26.604 "uuid": "78266e4c-1f2c-4b62-9d36-42395ce528bf", 00:09:26.604 "is_configured": true, 00:09:26.604 "data_offset": 2048, 00:09:26.604 "data_size": 63488 00:09:26.604 }, 00:09:26.604 { 00:09:26.604 "name": "BaseBdev2", 00:09:26.604 "uuid": "7ec59b7b-35b9-4955-a301-27c564c465a8", 00:09:26.604 "is_configured": true, 00:09:26.604 "data_offset": 2048, 00:09:26.604 "data_size": 63488 00:09:26.604 }, 00:09:26.604 { 
00:09:26.604 "name": "BaseBdev3", 00:09:26.604 "uuid": "641c2db8-c155-4caa-a3c9-2026c2abede8", 00:09:26.604 "is_configured": true, 00:09:26.604 "data_offset": 2048, 00:09:26.604 "data_size": 63488 00:09:26.604 } 00:09:26.604 ] 00:09:26.604 } 00:09:26.604 } 00:09:26.604 }' 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:26.604 BaseBdev2 00:09:26.604 BaseBdev3' 00:09:26.604 17:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.864 17:50:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- 
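The geometry check repeated above for BaseBdev1 through BaseBdev3 (bdev_raid.sh@192-193) boils down to joining four bdev fields with jq and string-comparing the result against the raid bdev's own geometry string. A minimal standalone sketch of that jq pattern, run against illustrative sample JSON rather than live `rpc_cmd bdev_get_bdevs` output (field values are made up for the example):

```shell
#!/usr/bin/env bash
# Illustrative bdev_get_bdevs-style JSON; values are sample data, not from this log.
bdev_json='[{"name":"BaseBdev1","block_size":512,"md_size":0,"md_interleave":false,"dif_type":0}]'

# Same filter shape as the traced test: join block_size, md_size,
# md_interleave and dif_type with single spaces into one comparable string.
cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$bdev_json")
echo "$cmp_base_bdev"
```

With jq 1.6+, numbers and booleans are stringified and nulls become empty strings in `join`, which is why the traced values like `'512   '` carry trailing spaces when the metadata fields are null.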
common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.864 [2024-10-25 17:50:45.178220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.864 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.865 
17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.865 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.124 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.124 "name": "Existed_Raid", 00:09:27.124 "uuid": "38f4b332-a49e-4b9d-bb8e-81bdb28ba7c3", 00:09:27.124 "strip_size_kb": 0, 00:09:27.124 "state": "online", 00:09:27.124 "raid_level": "raid1", 00:09:27.124 "superblock": true, 00:09:27.124 "num_base_bdevs": 3, 00:09:27.124 "num_base_bdevs_discovered": 2, 00:09:27.124 "num_base_bdevs_operational": 2, 00:09:27.124 "base_bdevs_list": [ 00:09:27.124 { 00:09:27.124 "name": null, 00:09:27.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.124 "is_configured": false, 00:09:27.124 "data_offset": 0, 00:09:27.124 "data_size": 63488 00:09:27.124 }, 00:09:27.124 { 00:09:27.124 "name": "BaseBdev2", 00:09:27.124 "uuid": "7ec59b7b-35b9-4955-a301-27c564c465a8", 00:09:27.124 "is_configured": true, 00:09:27.124 "data_offset": 2048, 00:09:27.124 "data_size": 63488 00:09:27.124 }, 00:09:27.124 { 00:09:27.124 "name": "BaseBdev3", 00:09:27.124 "uuid": "641c2db8-c155-4caa-a3c9-2026c2abede8", 00:09:27.124 "is_configured": true, 00:09:27.124 "data_offset": 2048, 00:09:27.124 "data_size": 63488 00:09:27.124 } 00:09:27.124 ] 00:09:27.124 }' 00:09:27.124 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.124 
17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.384 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.385 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.385 [2024-10-25 17:50:45.752335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.645 [2024-10-25 17:50:45.901896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.645 [2024-10-25 17:50:45.902038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.645 [2024-10-25 17:50:45.996405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.645 [2024-10-25 17:50:45.996535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.645 [2024-10-25 17:50:45.996552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.645 17:50:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.645 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.906 BaseBdev2 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.906 [ 00:09:27.906 { 00:09:27.906 "name": "BaseBdev2", 00:09:27.906 "aliases": [ 00:09:27.906 "6f4fc055-1af4-4d96-82d5-e43fc0e4b779" 00:09:27.906 ], 00:09:27.906 "product_name": "Malloc disk", 00:09:27.906 "block_size": 512, 00:09:27.906 "num_blocks": 65536, 00:09:27.906 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:27.906 "assigned_rate_limits": { 00:09:27.906 "rw_ios_per_sec": 0, 00:09:27.906 "rw_mbytes_per_sec": 0, 00:09:27.906 "r_mbytes_per_sec": 0, 00:09:27.906 "w_mbytes_per_sec": 0 00:09:27.906 }, 00:09:27.906 "claimed": false, 00:09:27.906 "zoned": false, 00:09:27.906 "supported_io_types": { 00:09:27.906 "read": true, 00:09:27.906 "write": true, 00:09:27.906 "unmap": true, 00:09:27.906 "flush": true, 00:09:27.906 "reset": true, 00:09:27.906 "nvme_admin": false, 00:09:27.906 "nvme_io": false, 00:09:27.906 
"nvme_io_md": false, 00:09:27.906 "write_zeroes": true, 00:09:27.906 "zcopy": true, 00:09:27.906 "get_zone_info": false, 00:09:27.906 "zone_management": false, 00:09:27.906 "zone_append": false, 00:09:27.906 "compare": false, 00:09:27.906 "compare_and_write": false, 00:09:27.906 "abort": true, 00:09:27.906 "seek_hole": false, 00:09:27.906 "seek_data": false, 00:09:27.906 "copy": true, 00:09:27.906 "nvme_iov_md": false 00:09:27.906 }, 00:09:27.906 "memory_domains": [ 00:09:27.906 { 00:09:27.906 "dma_device_id": "system", 00:09:27.906 "dma_device_type": 1 00:09:27.906 }, 00:09:27.906 { 00:09:27.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.906 "dma_device_type": 2 00:09:27.906 } 00:09:27.906 ], 00:09:27.906 "driver_specific": {} 00:09:27.906 } 00:09:27.906 ] 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.906 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 BaseBdev3 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 [ 00:09:27.907 { 00:09:27.907 "name": "BaseBdev3", 00:09:27.907 "aliases": [ 00:09:27.907 "9b5a3184-10c0-4d5e-aec0-75e8f50301ec" 00:09:27.907 ], 00:09:27.907 "product_name": "Malloc disk", 00:09:27.907 "block_size": 512, 00:09:27.907 "num_blocks": 65536, 00:09:27.907 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:27.907 "assigned_rate_limits": { 00:09:27.907 "rw_ios_per_sec": 0, 00:09:27.907 "rw_mbytes_per_sec": 0, 00:09:27.907 "r_mbytes_per_sec": 0, 00:09:27.907 "w_mbytes_per_sec": 0 00:09:27.907 }, 00:09:27.907 "claimed": false, 00:09:27.907 "zoned": false, 00:09:27.907 "supported_io_types": { 00:09:27.907 "read": true, 00:09:27.907 "write": true, 00:09:27.907 "unmap": true, 00:09:27.907 "flush": true, 00:09:27.907 "reset": true, 00:09:27.907 "nvme_admin": false, 
00:09:27.907 "nvme_io": false, 00:09:27.907 "nvme_io_md": false, 00:09:27.907 "write_zeroes": true, 00:09:27.907 "zcopy": true, 00:09:27.907 "get_zone_info": false, 00:09:27.907 "zone_management": false, 00:09:27.907 "zone_append": false, 00:09:27.907 "compare": false, 00:09:27.907 "compare_and_write": false, 00:09:27.907 "abort": true, 00:09:27.907 "seek_hole": false, 00:09:27.907 "seek_data": false, 00:09:27.907 "copy": true, 00:09:27.907 "nvme_iov_md": false 00:09:27.907 }, 00:09:27.907 "memory_domains": [ 00:09:27.907 { 00:09:27.907 "dma_device_id": "system", 00:09:27.907 "dma_device_type": 1 00:09:27.907 }, 00:09:27.907 { 00:09:27.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.907 "dma_device_type": 2 00:09:27.907 } 00:09:27.907 ], 00:09:27.907 "driver_specific": {} 00:09:27.907 } 00:09:27.907 ] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 [2024-10-25 17:50:46.209100] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.907 [2024-10-25 17:50:46.209189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.907 [2024-10-25 17:50:46.209228] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.907 [2024-10-25 17:50:46.211048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 
17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.907 "name": "Existed_Raid", 00:09:27.907 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:27.907 "strip_size_kb": 0, 00:09:27.907 "state": "configuring", 00:09:27.907 "raid_level": "raid1", 00:09:27.907 "superblock": true, 00:09:27.907 "num_base_bdevs": 3, 00:09:27.907 "num_base_bdevs_discovered": 2, 00:09:27.907 "num_base_bdevs_operational": 3, 00:09:27.907 "base_bdevs_list": [ 00:09:27.907 { 00:09:27.907 "name": "BaseBdev1", 00:09:27.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.907 "is_configured": false, 00:09:27.907 "data_offset": 0, 00:09:27.907 "data_size": 0 00:09:27.907 }, 00:09:27.907 { 00:09:27.907 "name": "BaseBdev2", 00:09:27.907 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:27.907 "is_configured": true, 00:09:27.907 "data_offset": 2048, 00:09:27.907 "data_size": 63488 00:09:27.907 }, 00:09:27.907 { 00:09:27.907 "name": "BaseBdev3", 00:09:27.907 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:27.907 "is_configured": true, 00:09:27.907 "data_offset": 2048, 00:09:27.907 "data_size": 63488 00:09:27.907 } 00:09:27.907 ] 00:09:27.907 }' 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.907 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.173 [2024-10-25 17:50:46.568580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.173 17:50:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.173 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.174 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.174 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.436 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.436 "name": 
"Existed_Raid", 00:09:28.436 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:28.436 "strip_size_kb": 0, 00:09:28.436 "state": "configuring", 00:09:28.436 "raid_level": "raid1", 00:09:28.436 "superblock": true, 00:09:28.436 "num_base_bdevs": 3, 00:09:28.436 "num_base_bdevs_discovered": 1, 00:09:28.436 "num_base_bdevs_operational": 3, 00:09:28.436 "base_bdevs_list": [ 00:09:28.436 { 00:09:28.436 "name": "BaseBdev1", 00:09:28.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.436 "is_configured": false, 00:09:28.436 "data_offset": 0, 00:09:28.436 "data_size": 0 00:09:28.436 }, 00:09:28.436 { 00:09:28.436 "name": null, 00:09:28.436 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:28.436 "is_configured": false, 00:09:28.436 "data_offset": 0, 00:09:28.436 "data_size": 63488 00:09:28.436 }, 00:09:28.436 { 00:09:28.436 "name": "BaseBdev3", 00:09:28.436 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:28.436 "is_configured": true, 00:09:28.436 "data_offset": 2048, 00:09:28.436 "data_size": 63488 00:09:28.436 } 00:09:28.436 ] 00:09:28.436 }' 00:09:28.436 17:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.436 17:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:28.696 
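The verify_raid_bdev_state helper (bdev_raid.sh@113) pulls one raid bdev's record out of `bdev_raid_get_bdevs all` output with a jq `select`, then reads individual fields from the captured record. A cut-down sketch of that selection step against sample JSON (the JSON content here is illustrative, not live RPC output):

```shell
#!/usr/bin/env bash
# Illustrative bdev_raid_get_bdevs-style output for the sketch.
raid_json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid1","num_base_bdevs_discovered":1}]'

# Select the entry whose .name matches, as bdev_raid.sh@113 does.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_json")

# Then read single fields out of the captured record.
state=$(jq -r '.state' <<< "$raid_bdev_info")
echo "$state"
```

Holding the selected record in a variable lets the helper check state, raid_level and discovered/operational counts with further small jq calls instead of re-querying the RPC for each field.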
17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.696 [2024-10-25 17:50:47.071849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.696 BaseBdev1 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:28.696 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.696 [ 00:09:28.696 { 00:09:28.696 "name": "BaseBdev1", 00:09:28.696 "aliases": [ 00:09:28.696 "55a6cf1f-3144-41f5-928c-eb5c9dd52714" 00:09:28.696 ], 00:09:28.696 "product_name": "Malloc disk", 00:09:28.696 "block_size": 512, 00:09:28.696 "num_blocks": 65536, 00:09:28.696 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:28.696 "assigned_rate_limits": { 00:09:28.696 "rw_ios_per_sec": 0, 00:09:28.696 "rw_mbytes_per_sec": 0, 00:09:28.696 "r_mbytes_per_sec": 0, 00:09:28.696 "w_mbytes_per_sec": 0 00:09:28.696 }, 00:09:28.696 "claimed": true, 00:09:28.696 "claim_type": "exclusive_write", 00:09:28.696 "zoned": false, 00:09:28.696 "supported_io_types": { 00:09:28.696 "read": true, 00:09:28.696 "write": true, 00:09:28.696 "unmap": true, 00:09:28.696 "flush": true, 00:09:28.696 "reset": true, 00:09:28.696 "nvme_admin": false, 00:09:28.696 "nvme_io": false, 00:09:28.696 "nvme_io_md": false, 00:09:28.696 "write_zeroes": true, 00:09:28.696 "zcopy": true, 00:09:28.696 "get_zone_info": false, 00:09:28.696 "zone_management": false, 00:09:28.696 "zone_append": false, 00:09:28.696 "compare": false, 00:09:28.696 "compare_and_write": false, 00:09:28.697 "abort": true, 00:09:28.697 "seek_hole": false, 00:09:28.697 "seek_data": false, 00:09:28.697 "copy": true, 00:09:28.697 "nvme_iov_md": false 00:09:28.697 }, 00:09:28.697 "memory_domains": [ 00:09:28.697 { 00:09:28.697 "dma_device_id": "system", 00:09:28.697 "dma_device_type": 1 00:09:28.697 }, 00:09:28.697 { 00:09:28.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.697 "dma_device_type": 2 00:09:28.697 } 00:09:28.697 ], 00:09:28.697 "driver_specific": {} 00:09:28.697 } 00:09:28.697 ] 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.697 
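The waitforbdev calls traced above (autotest_common.sh@899-@907) wait for bdev examine to finish and then query the bdev with a timeout before letting the test proceed. A generic stand-alone polling loop in the same spirit; the helper name and the probe command are illustrative, not the SPDK implementation:

```shell
#!/usr/bin/env bash
# Hypothetical polling helper: retry a probe command until it succeeds
# or the millisecond deadline passes. Not the SPDK waitforbdev itself.
wait_for_resource() {
    local probe_cmd=$1 timeout_ms=${2:-2000} waited=0
    while ! eval "$probe_cmd" >/dev/null 2>&1; do
        if (( waited >= timeout_ms )); then
            return 1          # deadline exceeded, resource never appeared
        fi
        sleep 0.1             # poll every 100 ms
        (( waited += 100 ))
    done
    return 0
}

# Succeeds immediately because the probe command is 'true'.
wait_for_resource true 500 && echo ready
```

In the real helper the probe would be an `rpc_cmd bdev_get_bdevs -b <name>` call, which is why a freshly created malloc bdev is queried with `-t 2000` in the trace before the raid test touches it.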
17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.697 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.957 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.957 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.957 "name": "Existed_Raid", 00:09:28.957 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:28.957 "strip_size_kb": 0, 
00:09:28.957 "state": "configuring", 00:09:28.957 "raid_level": "raid1", 00:09:28.957 "superblock": true, 00:09:28.957 "num_base_bdevs": 3, 00:09:28.957 "num_base_bdevs_discovered": 2, 00:09:28.957 "num_base_bdevs_operational": 3, 00:09:28.957 "base_bdevs_list": [ 00:09:28.957 { 00:09:28.957 "name": "BaseBdev1", 00:09:28.957 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:28.957 "is_configured": true, 00:09:28.957 "data_offset": 2048, 00:09:28.957 "data_size": 63488 00:09:28.957 }, 00:09:28.957 { 00:09:28.957 "name": null, 00:09:28.957 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:28.957 "is_configured": false, 00:09:28.957 "data_offset": 0, 00:09:28.957 "data_size": 63488 00:09:28.957 }, 00:09:28.957 { 00:09:28.957 "name": "BaseBdev3", 00:09:28.957 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:28.957 "is_configured": true, 00:09:28.957 "data_offset": 2048, 00:09:28.957 "data_size": 63488 00:09:28.957 } 00:09:28.957 ] 00:09:28.957 }' 00:09:28.957 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.957 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 [2024-10-25 17:50:47.570986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.218 "name": "Existed_Raid", 00:09:29.218 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:29.218 "strip_size_kb": 0, 00:09:29.218 "state": "configuring", 00:09:29.218 "raid_level": "raid1", 00:09:29.218 "superblock": true, 00:09:29.218 "num_base_bdevs": 3, 00:09:29.218 "num_base_bdevs_discovered": 1, 00:09:29.218 "num_base_bdevs_operational": 3, 00:09:29.218 "base_bdevs_list": [ 00:09:29.218 { 00:09:29.218 "name": "BaseBdev1", 00:09:29.218 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:29.218 "is_configured": true, 00:09:29.218 "data_offset": 2048, 00:09:29.218 "data_size": 63488 00:09:29.218 }, 00:09:29.218 { 00:09:29.218 "name": null, 00:09:29.218 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:29.218 "is_configured": false, 00:09:29.218 "data_offset": 0, 00:09:29.218 "data_size": 63488 00:09:29.218 }, 00:09:29.218 { 00:09:29.218 "name": null, 00:09:29.218 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:29.218 "is_configured": false, 00:09:29.218 "data_offset": 0, 00:09:29.218 "data_size": 63488 00:09:29.218 } 00:09:29.218 ] 00:09:29.218 }' 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.218 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.789 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.789 17:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.789 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:29.789 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.789 17:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.789 [2024-10-25 17:50:48.014327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.789 "name": "Existed_Raid", 00:09:29.789 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:29.789 "strip_size_kb": 0, 00:09:29.789 "state": "configuring", 00:09:29.789 "raid_level": "raid1", 00:09:29.789 "superblock": true, 00:09:29.789 "num_base_bdevs": 3, 00:09:29.789 "num_base_bdevs_discovered": 2, 00:09:29.789 "num_base_bdevs_operational": 3, 00:09:29.789 "base_bdevs_list": [ 00:09:29.789 { 00:09:29.789 "name": "BaseBdev1", 00:09:29.789 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:29.789 "is_configured": true, 00:09:29.789 "data_offset": 2048, 00:09:29.789 "data_size": 63488 00:09:29.789 }, 00:09:29.789 { 00:09:29.789 "name": null, 00:09:29.789 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:29.789 "is_configured": false, 00:09:29.789 "data_offset": 0, 00:09:29.789 "data_size": 63488 00:09:29.789 }, 00:09:29.789 { 00:09:29.789 "name": "BaseBdev3", 00:09:29.789 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:29.789 "is_configured": true, 00:09:29.789 "data_offset": 2048, 00:09:29.789 "data_size": 63488 00:09:29.789 } 00:09:29.789 ] 00:09:29.789 }' 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.789 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.049 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.049 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.049 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.049 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.049 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.309 [2024-10-25 17:50:48.513467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.309 "name": "Existed_Raid", 00:09:30.309 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:30.309 "strip_size_kb": 0, 00:09:30.309 "state": "configuring", 00:09:30.309 "raid_level": "raid1", 00:09:30.309 "superblock": true, 00:09:30.309 "num_base_bdevs": 3, 00:09:30.309 "num_base_bdevs_discovered": 1, 00:09:30.309 "num_base_bdevs_operational": 3, 00:09:30.309 "base_bdevs_list": [ 00:09:30.309 { 00:09:30.309 "name": null, 00:09:30.309 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:30.309 "is_configured": false, 00:09:30.309 "data_offset": 0, 00:09:30.309 "data_size": 63488 00:09:30.309 }, 00:09:30.309 { 00:09:30.309 "name": null, 00:09:30.309 "uuid": 
"6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:30.309 "is_configured": false, 00:09:30.309 "data_offset": 0, 00:09:30.309 "data_size": 63488 00:09:30.309 }, 00:09:30.309 { 00:09:30.309 "name": "BaseBdev3", 00:09:30.309 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:30.309 "is_configured": true, 00:09:30.309 "data_offset": 2048, 00:09:30.309 "data_size": 63488 00:09:30.309 } 00:09:30.309 ] 00:09:30.309 }' 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.309 17:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 [2024-10-25 17:50:49.092173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.880 "name": "Existed_Raid", 00:09:30.880 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:30.880 "strip_size_kb": 0, 00:09:30.880 "state": "configuring", 00:09:30.880 
"raid_level": "raid1", 00:09:30.880 "superblock": true, 00:09:30.880 "num_base_bdevs": 3, 00:09:30.880 "num_base_bdevs_discovered": 2, 00:09:30.880 "num_base_bdevs_operational": 3, 00:09:30.880 "base_bdevs_list": [ 00:09:30.880 { 00:09:30.880 "name": null, 00:09:30.880 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:30.880 "is_configured": false, 00:09:30.880 "data_offset": 0, 00:09:30.880 "data_size": 63488 00:09:30.880 }, 00:09:30.880 { 00:09:30.880 "name": "BaseBdev2", 00:09:30.880 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:30.880 "is_configured": true, 00:09:30.880 "data_offset": 2048, 00:09:30.880 "data_size": 63488 00:09:30.880 }, 00:09:30.880 { 00:09:30.880 "name": "BaseBdev3", 00:09:30.880 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:30.880 "is_configured": true, 00:09:30.880 "data_offset": 2048, 00:09:30.880 "data_size": 63488 00:09:30.880 } 00:09:30.880 ] 00:09:30.880 }' 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.880 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.140 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.140 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.140 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.140 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.400 17:50:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55a6cf1f-3144-41f5-928c-eb5c9dd52714 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 [2024-10-25 17:50:49.683420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:31.400 [2024-10-25 17:50:49.683641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:31.400 [2024-10-25 17:50:49.683654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.400 [2024-10-25 17:50:49.683914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:31.400 [2024-10-25 17:50:49.684083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:31.400 [2024-10-25 17:50:49.684101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:31.400 [2024-10-25 17:50:49.684236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.400 NewBaseBdev 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:31.400 
17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 [ 00:09:31.400 { 00:09:31.400 "name": "NewBaseBdev", 00:09:31.400 "aliases": [ 00:09:31.400 "55a6cf1f-3144-41f5-928c-eb5c9dd52714" 00:09:31.400 ], 00:09:31.400 "product_name": "Malloc disk", 00:09:31.400 "block_size": 512, 00:09:31.400 "num_blocks": 65536, 00:09:31.400 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:31.400 "assigned_rate_limits": { 00:09:31.400 "rw_ios_per_sec": 0, 00:09:31.400 "rw_mbytes_per_sec": 0, 00:09:31.400 "r_mbytes_per_sec": 0, 00:09:31.400 "w_mbytes_per_sec": 0 00:09:31.400 }, 00:09:31.400 "claimed": true, 00:09:31.400 "claim_type": "exclusive_write", 00:09:31.400 
"zoned": false, 00:09:31.400 "supported_io_types": { 00:09:31.400 "read": true, 00:09:31.400 "write": true, 00:09:31.400 "unmap": true, 00:09:31.400 "flush": true, 00:09:31.400 "reset": true, 00:09:31.400 "nvme_admin": false, 00:09:31.400 "nvme_io": false, 00:09:31.400 "nvme_io_md": false, 00:09:31.400 "write_zeroes": true, 00:09:31.400 "zcopy": true, 00:09:31.400 "get_zone_info": false, 00:09:31.400 "zone_management": false, 00:09:31.400 "zone_append": false, 00:09:31.400 "compare": false, 00:09:31.400 "compare_and_write": false, 00:09:31.400 "abort": true, 00:09:31.400 "seek_hole": false, 00:09:31.400 "seek_data": false, 00:09:31.400 "copy": true, 00:09:31.400 "nvme_iov_md": false 00:09:31.400 }, 00:09:31.400 "memory_domains": [ 00:09:31.400 { 00:09:31.400 "dma_device_id": "system", 00:09:31.400 "dma_device_type": 1 00:09:31.400 }, 00:09:31.400 { 00:09:31.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.400 "dma_device_type": 2 00:09:31.400 } 00:09:31.400 ], 00:09:31.400 "driver_specific": {} 00:09:31.400 } 00:09:31.400 ] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.400 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.400 "name": "Existed_Raid", 00:09:31.400 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:31.400 "strip_size_kb": 0, 00:09:31.400 "state": "online", 00:09:31.400 "raid_level": "raid1", 00:09:31.400 "superblock": true, 00:09:31.400 "num_base_bdevs": 3, 00:09:31.400 "num_base_bdevs_discovered": 3, 00:09:31.400 "num_base_bdevs_operational": 3, 00:09:31.400 "base_bdevs_list": [ 00:09:31.400 { 00:09:31.400 "name": "NewBaseBdev", 00:09:31.400 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:31.400 "is_configured": true, 00:09:31.401 "data_offset": 2048, 00:09:31.401 "data_size": 63488 00:09:31.401 }, 00:09:31.401 { 00:09:31.401 "name": "BaseBdev2", 00:09:31.401 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:31.401 "is_configured": true, 00:09:31.401 "data_offset": 2048, 00:09:31.401 "data_size": 63488 00:09:31.401 }, 00:09:31.401 
{ 00:09:31.401 "name": "BaseBdev3", 00:09:31.401 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:31.401 "is_configured": true, 00:09:31.401 "data_offset": 2048, 00:09:31.401 "data_size": 63488 00:09:31.401 } 00:09:31.401 ] 00:09:31.401 }' 00:09:31.401 17:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.401 17:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.661 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.921 [2024-10-25 17:50:50.103029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.921 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.921 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.921 "name": "Existed_Raid", 00:09:31.921 
"aliases": [ 00:09:31.921 "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442" 00:09:31.921 ], 00:09:31.921 "product_name": "Raid Volume", 00:09:31.921 "block_size": 512, 00:09:31.921 "num_blocks": 63488, 00:09:31.921 "uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:31.921 "assigned_rate_limits": { 00:09:31.921 "rw_ios_per_sec": 0, 00:09:31.921 "rw_mbytes_per_sec": 0, 00:09:31.921 "r_mbytes_per_sec": 0, 00:09:31.921 "w_mbytes_per_sec": 0 00:09:31.921 }, 00:09:31.921 "claimed": false, 00:09:31.921 "zoned": false, 00:09:31.921 "supported_io_types": { 00:09:31.921 "read": true, 00:09:31.921 "write": true, 00:09:31.921 "unmap": false, 00:09:31.921 "flush": false, 00:09:31.921 "reset": true, 00:09:31.921 "nvme_admin": false, 00:09:31.921 "nvme_io": false, 00:09:31.921 "nvme_io_md": false, 00:09:31.921 "write_zeroes": true, 00:09:31.921 "zcopy": false, 00:09:31.921 "get_zone_info": false, 00:09:31.921 "zone_management": false, 00:09:31.921 "zone_append": false, 00:09:31.921 "compare": false, 00:09:31.922 "compare_and_write": false, 00:09:31.922 "abort": false, 00:09:31.922 "seek_hole": false, 00:09:31.922 "seek_data": false, 00:09:31.922 "copy": false, 00:09:31.922 "nvme_iov_md": false 00:09:31.922 }, 00:09:31.922 "memory_domains": [ 00:09:31.922 { 00:09:31.922 "dma_device_id": "system", 00:09:31.922 "dma_device_type": 1 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.922 "dma_device_type": 2 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "dma_device_id": "system", 00:09:31.922 "dma_device_type": 1 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.922 "dma_device_type": 2 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "dma_device_id": "system", 00:09:31.922 "dma_device_type": 1 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.922 "dma_device_type": 2 00:09:31.922 } 00:09:31.922 ], 00:09:31.922 "driver_specific": { 00:09:31.922 "raid": { 00:09:31.922 
"uuid": "9f04d90d-fc0c-48cb-8ff3-12aa2bef7442", 00:09:31.922 "strip_size_kb": 0, 00:09:31.922 "state": "online", 00:09:31.922 "raid_level": "raid1", 00:09:31.922 "superblock": true, 00:09:31.922 "num_base_bdevs": 3, 00:09:31.922 "num_base_bdevs_discovered": 3, 00:09:31.922 "num_base_bdevs_operational": 3, 00:09:31.922 "base_bdevs_list": [ 00:09:31.922 { 00:09:31.922 "name": "NewBaseBdev", 00:09:31.922 "uuid": "55a6cf1f-3144-41f5-928c-eb5c9dd52714", 00:09:31.922 "is_configured": true, 00:09:31.922 "data_offset": 2048, 00:09:31.922 "data_size": 63488 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "name": "BaseBdev2", 00:09:31.922 "uuid": "6f4fc055-1af4-4d96-82d5-e43fc0e4b779", 00:09:31.922 "is_configured": true, 00:09:31.922 "data_offset": 2048, 00:09:31.922 "data_size": 63488 00:09:31.922 }, 00:09:31.922 { 00:09:31.922 "name": "BaseBdev3", 00:09:31.922 "uuid": "9b5a3184-10c0-4d5e-aec0-75e8f50301ec", 00:09:31.922 "is_configured": true, 00:09:31.922 "data_offset": 2048, 00:09:31.922 "data_size": 63488 00:09:31.922 } 00:09:31.922 ] 00:09:31.922 } 00:09:31.922 } 00:09:31.922 }' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.922 BaseBdev2 00:09:31.922 BaseBdev3' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.922 17:50:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.922 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.182 [2024-10-25 17:50:50.370282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.182 [2024-10-25 17:50:50.370314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.182 [2024-10-25 17:50:50.370375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.182 [2024-10-25 17:50:50.370649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.182 [2024-10-25 17:50:50.370667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67776 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 67776 ']' 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67776 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67776 00:09:32.182 killing process with pid 67776 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67776' 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 67776 00:09:32.182 [2024-10-25 17:50:50.405873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.182 17:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67776 00:09:32.441 [2024-10-25 17:50:50.694336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.425 17:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:33.425 00:09:33.425 real 0m10.192s 00:09:33.425 user 0m16.171s 00:09:33.425 sys 0m1.841s 00:09:33.425 17:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.425 17:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 ************************************ 00:09:33.425 END TEST raid_state_function_test_sb 00:09:33.425 ************************************ 00:09:33.425 17:50:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:33.425 17:50:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:33.425 17:50:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.425 17:50:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 ************************************ 00:09:33.425 START TEST raid_superblock_test 00:09:33.425 ************************************ 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68396 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68396 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68396 ']' 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.425 17:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.686 [2024-10-25 17:50:51.911722] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:09:33.686 [2024-10-25 17:50:51.911867] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68396 ] 00:09:33.686 [2024-10-25 17:50:52.090301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.946 [2024-10-25 17:50:52.200526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.206 [2024-10-25 17:50:52.391342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.206 [2024-10-25 17:50:52.391402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:34.467 
17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 malloc1 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 [2024-10-25 17:50:52.756554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.467 [2024-10-25 17:50:52.756617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.467 [2024-10-25 17:50:52.756641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:34.467 [2024-10-25 17:50:52.756651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.467 [2024-10-25 17:50:52.758761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.467 [2024-10-25 17:50:52.758799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.467 pt1 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 malloc2 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 [2024-10-25 17:50:52.810346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.467 [2024-10-25 17:50:52.810401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.467 [2024-10-25 17:50:52.810425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:34.467 [2024-10-25 17:50:52.810435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.467 [2024-10-25 17:50:52.812560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.467 [2024-10-25 17:50:52.812596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.467 
pt2 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 malloc3 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 [2024-10-25 17:50:52.874526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:34.467 [2024-10-25 17:50:52.874573] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.467 [2024-10-25 17:50:52.874593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:34.467 [2024-10-25 17:50:52.874602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.467 [2024-10-25 17:50:52.876575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.467 [2024-10-25 17:50:52.876608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:34.467 pt3 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.467 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.467 [2024-10-25 17:50:52.886560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.468 [2024-10-25 17:50:52.888357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.468 [2024-10-25 17:50:52.888440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:34.468 [2024-10-25 17:50:52.888594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:34.468 [2024-10-25 17:50:52.888610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.468 [2024-10-25 17:50:52.888856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.468 
[2024-10-25 17:50:52.889040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:34.468 [2024-10-25 17:50:52.889062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:34.468 [2024-10-25 17:50:52.889223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.468 17:50:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.728 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.728 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.728 "name": "raid_bdev1", 00:09:34.728 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:34.728 "strip_size_kb": 0, 00:09:34.728 "state": "online", 00:09:34.728 "raid_level": "raid1", 00:09:34.728 "superblock": true, 00:09:34.728 "num_base_bdevs": 3, 00:09:34.728 "num_base_bdevs_discovered": 3, 00:09:34.728 "num_base_bdevs_operational": 3, 00:09:34.728 "base_bdevs_list": [ 00:09:34.728 { 00:09:34.728 "name": "pt1", 00:09:34.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.728 "is_configured": true, 00:09:34.728 "data_offset": 2048, 00:09:34.728 "data_size": 63488 00:09:34.728 }, 00:09:34.728 { 00:09:34.728 "name": "pt2", 00:09:34.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.728 "is_configured": true, 00:09:34.728 "data_offset": 2048, 00:09:34.728 "data_size": 63488 00:09:34.728 }, 00:09:34.728 { 00:09:34.728 "name": "pt3", 00:09:34.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.728 "is_configured": true, 00:09:34.728 "data_offset": 2048, 00:09:34.728 "data_size": 63488 00:09:34.728 } 00:09:34.728 ] 00:09:34.728 }' 00:09:34.728 17:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.728 17:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.989 17:50:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.989 [2024-10-25 17:50:53.334108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.989 "name": "raid_bdev1", 00:09:34.989 "aliases": [ 00:09:34.989 "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc" 00:09:34.989 ], 00:09:34.989 "product_name": "Raid Volume", 00:09:34.989 "block_size": 512, 00:09:34.989 "num_blocks": 63488, 00:09:34.989 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:34.989 "assigned_rate_limits": { 00:09:34.989 "rw_ios_per_sec": 0, 00:09:34.989 "rw_mbytes_per_sec": 0, 00:09:34.989 "r_mbytes_per_sec": 0, 00:09:34.989 "w_mbytes_per_sec": 0 00:09:34.989 }, 00:09:34.989 "claimed": false, 00:09:34.989 "zoned": false, 00:09:34.989 "supported_io_types": { 00:09:34.989 "read": true, 00:09:34.989 "write": true, 00:09:34.989 "unmap": false, 00:09:34.989 "flush": false, 00:09:34.989 "reset": true, 00:09:34.989 "nvme_admin": false, 00:09:34.989 "nvme_io": false, 00:09:34.989 "nvme_io_md": false, 00:09:34.989 "write_zeroes": true, 00:09:34.989 "zcopy": false, 00:09:34.989 "get_zone_info": false, 00:09:34.989 "zone_management": false, 00:09:34.989 "zone_append": false, 00:09:34.989 "compare": false, 00:09:34.989 
"compare_and_write": false, 00:09:34.989 "abort": false, 00:09:34.989 "seek_hole": false, 00:09:34.989 "seek_data": false, 00:09:34.989 "copy": false, 00:09:34.989 "nvme_iov_md": false 00:09:34.989 }, 00:09:34.989 "memory_domains": [ 00:09:34.989 { 00:09:34.989 "dma_device_id": "system", 00:09:34.989 "dma_device_type": 1 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.989 "dma_device_type": 2 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "dma_device_id": "system", 00:09:34.989 "dma_device_type": 1 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.989 "dma_device_type": 2 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "dma_device_id": "system", 00:09:34.989 "dma_device_type": 1 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.989 "dma_device_type": 2 00:09:34.989 } 00:09:34.989 ], 00:09:34.989 "driver_specific": { 00:09:34.989 "raid": { 00:09:34.989 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:34.989 "strip_size_kb": 0, 00:09:34.989 "state": "online", 00:09:34.989 "raid_level": "raid1", 00:09:34.989 "superblock": true, 00:09:34.989 "num_base_bdevs": 3, 00:09:34.989 "num_base_bdevs_discovered": 3, 00:09:34.989 "num_base_bdevs_operational": 3, 00:09:34.989 "base_bdevs_list": [ 00:09:34.989 { 00:09:34.989 "name": "pt1", 00:09:34.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.989 "is_configured": true, 00:09:34.989 "data_offset": 2048, 00:09:34.989 "data_size": 63488 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "name": "pt2", 00:09:34.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.989 "is_configured": true, 00:09:34.989 "data_offset": 2048, 00:09:34.989 "data_size": 63488 00:09:34.989 }, 00:09:34.989 { 00:09:34.989 "name": "pt3", 00:09:34.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.989 "is_configured": true, 00:09:34.989 "data_offset": 2048, 00:09:34.989 "data_size": 63488 00:09:34.989 } 
00:09:34.989 ] 00:09:34.989 } 00:09:34.989 } 00:09:34.989 }' 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.989 pt2 00:09:34.989 pt3' 00:09:34.989 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.249 17:50:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.249 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.250 [2024-10-25 17:50:53.581597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc ']' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.250 [2024-10-25 17:50:53.629275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.250 [2024-10-25 17:50:53.629303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.250 [2024-10-25 17:50:53.629373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.250 [2024-10-25 17:50:53.629441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.250 [2024-10-25 17:50:53.629458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:35.250 
17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.250 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.510 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.510 [2024-10-25 17:50:53.773085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:35.511 [2024-10-25 17:50:53.774895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:35.511 [2024-10-25 17:50:53.774947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed
00:09:35.511 [2024-10-25 17:50:53.774997] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:35.511 [2024-10-25 17:50:53.775042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:35.511 [2024-10-25 17:50:53.775059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:35.511 [2024-10-25 17:50:53.775076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:35.511 [2024-10-25 17:50:53.775084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:35.511 request:
00:09:35.511 {
00:09:35.511 "name": "raid_bdev1",
00:09:35.511 "raid_level": "raid1",
00:09:35.511 "base_bdevs": [
00:09:35.511 "malloc1",
00:09:35.511 "malloc2",
00:09:35.511 "malloc3"
00:09:35.511 ],
00:09:35.511 "superblock": false,
00:09:35.511 "method": "bdev_raid_create",
00:09:35.511 "req_id": 1
00:09:35.511 }
00:09:35.511 Got JSON-RPC error response
00:09:35.511 response:
00:09:35.511 {
00:09:35.511 "code": -17,
00:09:35.511 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:35.511 }
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:35.511 17:50:53
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.511 [2024-10-25 17:50:53.820974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.511 [2024-10-25 17:50:53.821038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.511 [2024-10-25 17:50:53.821063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:35.511 [2024-10-25 17:50:53.821073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.511 [2024-10-25 17:50:53.823256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.511 [2024-10-25 17:50:53.823290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.511 [2024-10-25 17:50:53.823372] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:35.511 [2024-10-25 17:50:53.823432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.511 pt1 00:09:35.511 17:50:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.511 "name": "raid_bdev1", 00:09:35.511 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:35.511 "strip_size_kb": 0, 00:09:35.511 "state": 
"configuring", 00:09:35.511 "raid_level": "raid1", 00:09:35.511 "superblock": true, 00:09:35.511 "num_base_bdevs": 3, 00:09:35.511 "num_base_bdevs_discovered": 1, 00:09:35.511 "num_base_bdevs_operational": 3, 00:09:35.511 "base_bdevs_list": [ 00:09:35.511 { 00:09:35.511 "name": "pt1", 00:09:35.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.511 "is_configured": true, 00:09:35.511 "data_offset": 2048, 00:09:35.511 "data_size": 63488 00:09:35.511 }, 00:09:35.511 { 00:09:35.511 "name": null, 00:09:35.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.511 "is_configured": false, 00:09:35.511 "data_offset": 2048, 00:09:35.511 "data_size": 63488 00:09:35.511 }, 00:09:35.511 { 00:09:35.511 "name": null, 00:09:35.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.511 "is_configured": false, 00:09:35.511 "data_offset": 2048, 00:09:35.511 "data_size": 63488 00:09:35.511 } 00:09:35.511 ] 00:09:35.511 }' 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.511 17:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.081 [2024-10-25 17:50:54.300204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.081 [2024-10-25 17:50:54.300270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.081 [2024-10-25 17:50:54.300293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:36.081 
[2024-10-25 17:50:54.300302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.081 [2024-10-25 17:50:54.300737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.081 [2024-10-25 17:50:54.300760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.081 [2024-10-25 17:50:54.300858] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:36.081 [2024-10-25 17:50:54.300882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.081 pt2 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.081 [2024-10-25 17:50:54.312165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.081 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.081 "name": "raid_bdev1", 00:09:36.081 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:36.081 "strip_size_kb": 0, 00:09:36.081 "state": "configuring", 00:09:36.081 "raid_level": "raid1", 00:09:36.081 "superblock": true, 00:09:36.081 "num_base_bdevs": 3, 00:09:36.081 "num_base_bdevs_discovered": 1, 00:09:36.081 "num_base_bdevs_operational": 3, 00:09:36.081 "base_bdevs_list": [ 00:09:36.081 { 00:09:36.081 "name": "pt1", 00:09:36.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.081 "is_configured": true, 00:09:36.081 "data_offset": 2048, 00:09:36.081 "data_size": 63488 00:09:36.081 }, 00:09:36.081 { 00:09:36.081 "name": null, 00:09:36.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.081 "is_configured": false, 00:09:36.081 "data_offset": 0, 00:09:36.081 "data_size": 63488 00:09:36.081 }, 00:09:36.081 { 00:09:36.081 "name": null, 00:09:36.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.081 "is_configured": false, 00:09:36.081 
"data_offset": 2048, 00:09:36.081 "data_size": 63488 00:09:36.081 } 00:09:36.081 ] 00:09:36.081 }' 00:09:36.082 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.082 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.342 [2024-10-25 17:50:54.735418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.342 [2024-10-25 17:50:54.735474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.342 [2024-10-25 17:50:54.735489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:36.342 [2024-10-25 17:50:54.735499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.342 [2024-10-25 17:50:54.735928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.342 [2024-10-25 17:50:54.735948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.342 [2024-10-25 17:50:54.736020] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:36.342 [2024-10-25 17:50:54.736058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.342 pt2 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.342 17:50:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.342 [2024-10-25 17:50:54.743402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:36.342 [2024-10-25 17:50:54.743445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.342 [2024-10-25 17:50:54.743463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:36.342 [2024-10-25 17:50:54.743475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.342 [2024-10-25 17:50:54.743811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.342 [2024-10-25 17:50:54.743846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:36.342 [2024-10-25 17:50:54.743906] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:36.342 [2024-10-25 17:50:54.743925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.342 [2024-10-25 17:50:54.744049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.342 [2024-10-25 17:50:54.744074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.342 [2024-10-25 17:50:54.744295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:36.342 [2024-10-25 17:50:54.744449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:36.342 [2024-10-25 17:50:54.744462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:36.342 [2024-10-25 17:50:54.744608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.342 pt3 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.342 17:50:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x
00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:36.342 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.603 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.603 "name": "raid_bdev1",
00:09:36.603 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc",
00:09:36.603 "strip_size_kb": 0,
00:09:36.603 "state": "online",
00:09:36.603 "raid_level": "raid1",
00:09:36.603 "superblock": true,
00:09:36.603 "num_base_bdevs": 3,
00:09:36.603 "num_base_bdevs_discovered": 3,
00:09:36.603 "num_base_bdevs_operational": 3,
00:09:36.603 "base_bdevs_list": [
00:09:36.603 {
00:09:36.603 "name": "pt1",
00:09:36.603 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:36.603 "is_configured": true,
00:09:36.603 "data_offset": 2048,
00:09:36.603 "data_size": 63488
00:09:36.603 },
00:09:36.603 {
00:09:36.603 "name": "pt2",
00:09:36.603 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:36.603 "is_configured": true,
00:09:36.603 "data_offset": 2048,
00:09:36.603 "data_size": 63488
00:09:36.603 },
00:09:36.603 {
00:09:36.603 "name": "pt3",
00:09:36.603 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:36.603 "is_configured": true,
00:09:36.603 "data_offset": 2048,
00:09:36.603 "data_size": 63488
00:09:36.603 }
00:09:36.603 ]
00:09:36.603 }'
00:09:36.603 17:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.603 17:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.864 [2024-10-25 17:50:55.163018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.864 "name": "raid_bdev1", 00:09:36.864 "aliases": [ 00:09:36.864 "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc" 00:09:36.864 ], 00:09:36.864 "product_name": "Raid Volume", 00:09:36.864 "block_size": 512, 00:09:36.864 "num_blocks": 63488, 00:09:36.864 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:36.864 "assigned_rate_limits": { 00:09:36.864 "rw_ios_per_sec": 0, 00:09:36.864 "rw_mbytes_per_sec": 0, 00:09:36.864 "r_mbytes_per_sec": 0, 00:09:36.864 "w_mbytes_per_sec": 0 00:09:36.864 }, 00:09:36.864 "claimed": false, 00:09:36.864 "zoned": false, 00:09:36.864 "supported_io_types": { 00:09:36.864 "read": true, 00:09:36.864 "write": true, 00:09:36.864 "unmap": false, 00:09:36.864 "flush": false, 00:09:36.864 "reset": true, 00:09:36.864 "nvme_admin": false, 00:09:36.864 "nvme_io": false, 00:09:36.864 "nvme_io_md": false, 00:09:36.864 "write_zeroes": true, 00:09:36.864 "zcopy": false, 00:09:36.864 "get_zone_info": false, 
00:09:36.864 "zone_management": false, 00:09:36.864 "zone_append": false, 00:09:36.864 "compare": false, 00:09:36.864 "compare_and_write": false, 00:09:36.864 "abort": false, 00:09:36.864 "seek_hole": false, 00:09:36.864 "seek_data": false, 00:09:36.864 "copy": false, 00:09:36.864 "nvme_iov_md": false 00:09:36.864 }, 00:09:36.864 "memory_domains": [ 00:09:36.864 { 00:09:36.864 "dma_device_id": "system", 00:09:36.864 "dma_device_type": 1 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.864 "dma_device_type": 2 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "dma_device_id": "system", 00:09:36.864 "dma_device_type": 1 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.864 "dma_device_type": 2 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "dma_device_id": "system", 00:09:36.864 "dma_device_type": 1 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.864 "dma_device_type": 2 00:09:36.864 } 00:09:36.864 ], 00:09:36.864 "driver_specific": { 00:09:36.864 "raid": { 00:09:36.864 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:36.864 "strip_size_kb": 0, 00:09:36.864 "state": "online", 00:09:36.864 "raid_level": "raid1", 00:09:36.864 "superblock": true, 00:09:36.864 "num_base_bdevs": 3, 00:09:36.864 "num_base_bdevs_discovered": 3, 00:09:36.864 "num_base_bdevs_operational": 3, 00:09:36.864 "base_bdevs_list": [ 00:09:36.864 { 00:09:36.864 "name": "pt1", 00:09:36.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.864 "is_configured": true, 00:09:36.864 "data_offset": 2048, 00:09:36.864 "data_size": 63488 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "name": "pt2", 00:09:36.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.864 "is_configured": true, 00:09:36.864 "data_offset": 2048, 00:09:36.864 "data_size": 63488 00:09:36.864 }, 00:09:36.864 { 00:09:36.864 "name": "pt3", 00:09:36.864 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:36.864 "is_configured": true, 00:09:36.864 "data_offset": 2048, 00:09:36.864 "data_size": 63488 00:09:36.864 } 00:09:36.864 ] 00:09:36.864 } 00:09:36.864 } 00:09:36.864 }' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:36.864 pt2 00:09:36.864 pt3' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.864 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 [2024-10-25 17:50:55.442438] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc '!=' 6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc ']' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 [2024-10-25 17:50:55.474183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.125 17:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.125 "name": "raid_bdev1", 00:09:37.125 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:37.125 "strip_size_kb": 0, 00:09:37.125 "state": "online", 00:09:37.125 "raid_level": "raid1", 00:09:37.125 "superblock": true, 00:09:37.125 "num_base_bdevs": 3, 00:09:37.125 "num_base_bdevs_discovered": 2, 00:09:37.125 "num_base_bdevs_operational": 2, 00:09:37.125 "base_bdevs_list": [ 00:09:37.125 { 00:09:37.125 "name": null, 00:09:37.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.125 "is_configured": false, 00:09:37.125 "data_offset": 0, 00:09:37.125 "data_size": 63488 00:09:37.125 }, 00:09:37.125 { 00:09:37.125 "name": "pt2", 00:09:37.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.125 "is_configured": true, 00:09:37.125 "data_offset": 2048, 00:09:37.125 "data_size": 63488 00:09:37.125 }, 00:09:37.125 { 00:09:37.125 "name": "pt3", 00:09:37.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.125 "is_configured": true, 00:09:37.125 "data_offset": 2048, 00:09:37.125 "data_size": 63488 00:09:37.125 } 
00:09:37.125 ] 00:09:37.125 }' 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.125 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 [2024-10-25 17:50:55.921355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.696 [2024-10-25 17:50:55.921382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.696 [2024-10-25 17:50:55.921439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.696 [2024-10-25 17:50:55.921488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.696 [2024-10-25 17:50:55.921501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 [2024-10-25 17:50:56.009196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.696 [2024-10-25 17:50:56.009246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.696 [2024-10-25 17:50:56.009261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:37.696 [2024-10-25 17:50:56.009271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.696 [2024-10-25 17:50:56.011357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.696 [2024-10-25 17:50:56.011393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.696 [2024-10-25 17:50:56.011461] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.696 [2024-10-25 17:50:56.011500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.696 pt2 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.696 17:50:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.696 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.696 "name": "raid_bdev1", 00:09:37.696 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:37.696 "strip_size_kb": 0, 00:09:37.696 "state": "configuring", 00:09:37.696 "raid_level": "raid1", 00:09:37.696 "superblock": true, 00:09:37.696 "num_base_bdevs": 3, 00:09:37.696 "num_base_bdevs_discovered": 1, 00:09:37.696 "num_base_bdevs_operational": 2, 00:09:37.696 "base_bdevs_list": [ 00:09:37.696 { 00:09:37.696 "name": null, 00:09:37.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.697 "is_configured": false, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "name": "pt2", 00:09:37.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.697 "is_configured": true, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 }, 00:09:37.697 { 00:09:37.697 "name": null, 00:09:37.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.697 "is_configured": false, 00:09:37.697 "data_offset": 2048, 00:09:37.697 "data_size": 63488 00:09:37.697 } 
00:09:37.697 ] 00:09:37.697 }' 00:09:37.697 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.697 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 [2024-10-25 17:50:56.452507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:38.267 [2024-10-25 17:50:56.452576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.267 [2024-10-25 17:50:56.452597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:38.267 [2024-10-25 17:50:56.452609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.267 [2024-10-25 17:50:56.453066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.267 [2024-10-25 17:50:56.453085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:38.267 [2024-10-25 17:50:56.453173] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:38.267 [2024-10-25 17:50:56.453201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:38.267 [2024-10-25 17:50:56.453322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:38.267 [2024-10-25 17:50:56.453333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.267 [2024-10-25 17:50:56.453576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:38.267 [2024-10-25 17:50:56.453742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.267 [2024-10-25 17:50:56.453759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:38.267 [2024-10-25 17:50:56.453907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.267 pt3 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.267 
17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.267 "name": "raid_bdev1", 00:09:38.267 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:38.267 "strip_size_kb": 0, 00:09:38.267 "state": "online", 00:09:38.267 "raid_level": "raid1", 00:09:38.267 "superblock": true, 00:09:38.267 "num_base_bdevs": 3, 00:09:38.267 "num_base_bdevs_discovered": 2, 00:09:38.267 "num_base_bdevs_operational": 2, 00:09:38.267 "base_bdevs_list": [ 00:09:38.267 { 00:09:38.267 "name": null, 00:09:38.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.267 "is_configured": false, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "name": "pt2", 00:09:38.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.267 "is_configured": true, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 }, 00:09:38.267 { 00:09:38.267 "name": "pt3", 00:09:38.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.267 "is_configured": true, 00:09:38.267 "data_offset": 2048, 00:09:38.267 "data_size": 63488 00:09:38.267 } 00:09:38.267 ] 00:09:38.267 }' 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.267 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.527 17:50:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.527 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 [2024-10-25 17:50:56.919686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.527 [2024-10-25 17:50:56.919719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.527 [2024-10-25 17:50:56.919789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.528 [2024-10-25 17:50:56.919858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.528 [2024-10-25 17:50:56.919868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.528 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.788 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:38.788 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:38.788 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:38.788 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:38.788 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.789 [2024-10-25 17:50:56.987583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.789 [2024-10-25 17:50:56.987632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.789 [2024-10-25 17:50:56.987653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:38.789 [2024-10-25 17:50:56.987661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.789 [2024-10-25 17:50:56.989768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.789 [2024-10-25 17:50:56.989802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.789 [2024-10-25 17:50:56.989891] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:38.789 [2024-10-25 17:50:56.989935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.789 [2024-10-25 17:50:56.990072] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:38.789 [2024-10-25 17:50:56.990086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.789 [2024-10-25 17:50:56.990102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:38.789 [2024-10-25 17:50:56.990148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.789 pt1 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.789 17:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.789 17:50:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.789 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.789 "name": "raid_bdev1", 00:09:38.789 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:38.789 "strip_size_kb": 0, 00:09:38.789 "state": "configuring", 00:09:38.789 "raid_level": "raid1", 00:09:38.789 "superblock": true, 00:09:38.789 "num_base_bdevs": 3, 00:09:38.789 "num_base_bdevs_discovered": 1, 00:09:38.789 "num_base_bdevs_operational": 2, 00:09:38.789 "base_bdevs_list": [ 00:09:38.789 { 00:09:38.789 "name": null, 00:09:38.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.789 "is_configured": false, 00:09:38.789 "data_offset": 2048, 00:09:38.789 "data_size": 63488 00:09:38.789 }, 00:09:38.789 { 00:09:38.789 "name": "pt2", 00:09:38.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.789 "is_configured": true, 00:09:38.789 "data_offset": 2048, 00:09:38.789 "data_size": 63488 00:09:38.789 }, 00:09:38.789 { 00:09:38.789 "name": null, 00:09:38.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.789 "is_configured": false, 00:09:38.789 "data_offset": 2048, 00:09:38.789 "data_size": 63488 00:09:38.789 } 00:09:38.789 ] 00:09:38.789 }' 00:09:38.789 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.789 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.050 [2024-10-25 17:50:57.426848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.050 [2024-10-25 17:50:57.426906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.050 [2024-10-25 17:50:57.426926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:39.050 [2024-10-25 17:50:57.426935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.050 [2024-10-25 17:50:57.427384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.050 [2024-10-25 17:50:57.427404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.050 [2024-10-25 17:50:57.427486] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:39.050 [2024-10-25 17:50:57.427530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.050 [2024-10-25 17:50:57.427659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:39.050 [2024-10-25 17:50:57.427667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.050 [2024-10-25 17:50:57.427923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:39.050 [2024-10-25 17:50:57.428091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:39.050 [2024-10-25 17:50:57.428104] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:39.050 [2024-10-25 17:50:57.428234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.050 pt3 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.050 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:39.311 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.311 "name": "raid_bdev1", 00:09:39.311 "uuid": "6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc", 00:09:39.311 "strip_size_kb": 0, 00:09:39.311 "state": "online", 00:09:39.311 "raid_level": "raid1", 00:09:39.311 "superblock": true, 00:09:39.311 "num_base_bdevs": 3, 00:09:39.311 "num_base_bdevs_discovered": 2, 00:09:39.311 "num_base_bdevs_operational": 2, 00:09:39.311 "base_bdevs_list": [ 00:09:39.311 { 00:09:39.311 "name": null, 00:09:39.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.311 "is_configured": false, 00:09:39.311 "data_offset": 2048, 00:09:39.311 "data_size": 63488 00:09:39.311 }, 00:09:39.311 { 00:09:39.311 "name": "pt2", 00:09:39.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.311 "is_configured": true, 00:09:39.311 "data_offset": 2048, 00:09:39.311 "data_size": 63488 00:09:39.311 }, 00:09:39.311 { 00:09:39.311 "name": "pt3", 00:09:39.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.311 "is_configured": true, 00:09:39.311 "data_offset": 2048, 00:09:39.311 "data_size": 63488 00:09:39.311 } 00:09:39.311 ] 00:09:39.311 }' 00:09:39.311 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.311 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.571 [2024-10-25 17:50:57.906235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc '!=' 6f4e47a9-3d15-4ed4-b2d5-1b8d48de77bc ']' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68396 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68396 ']' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68396 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68396 00:09:39.571 killing process with pid 68396 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68396' 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68396 00:09:39.571 [2024-10-25 17:50:57.982201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.571 [2024-10-25 17:50:57.982288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.571 [2024-10-25 17:50:57.982347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.571 [2024-10-25 17:50:57.982359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:39.571 17:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68396 00:09:40.142 [2024-10-25 17:50:58.272167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.084 17:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:41.084 00:09:41.084 real 0m7.531s 00:09:41.084 user 0m11.756s 00:09:41.084 sys 0m1.407s 00:09:41.084 17:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.084 17:50:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.084 ************************************ 00:09:41.084 END TEST raid_superblock_test 00:09:41.084 ************************************ 00:09:41.084 17:50:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:41.084 17:50:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.084 17:50:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.084 17:50:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.084 ************************************ 00:09:41.084 START TEST raid_read_error_test 00:09:41.084 ************************************ 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:41.084 17:50:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.084 17:50:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5k93WjcYnt 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68842 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68842 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 68842 ']' 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.084 17:50:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.344 [2024-10-25 17:50:59.531219] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:09:41.344 [2024-10-25 17:50:59.531353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68842 ] 00:09:41.344 [2024-10-25 17:50:59.710431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.604 [2024-10-25 17:50:59.819898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.604 [2024-10-25 17:51:00.020542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.604 [2024-10-25 17:51:00.020584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 BaseBdev1_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 true 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 [2024-10-25 17:51:00.409373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.175 [2024-10-25 17:51:00.409425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.175 [2024-10-25 17:51:00.409443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.175 [2024-10-25 17:51:00.409453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.175 [2024-10-25 17:51:00.411465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.175 [2024-10-25 17:51:00.411501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.175 BaseBdev1 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 BaseBdev2_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 true 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 [2024-10-25 17:51:00.461018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.175 [2024-10-25 17:51:00.461075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.175 [2024-10-25 17:51:00.461091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.175 [2024-10-25 17:51:00.461101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.175 [2024-10-25 17:51:00.463057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.175 [2024-10-25 17:51:00.463091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.175 BaseBdev2 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 BaseBdev3_malloc 00:09:42.175 17:51:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 true 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 [2024-10-25 17:51:00.558499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.175 [2024-10-25 17:51:00.558552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.175 [2024-10-25 17:51:00.558568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.175 [2024-10-25 17:51:00.558577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.175 [2024-10-25 17:51:00.560553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.175 [2024-10-25 17:51:00.560594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:42.175 BaseBdev3 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 [2024-10-25 17:51:00.570546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.175 [2024-10-25 17:51:00.572321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.175 [2024-10-25 17:51:00.572395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.175 [2024-10-25 17:51:00.572599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.175 [2024-10-25 17:51:00.572614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.175 [2024-10-25 17:51:00.572872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:42.175 [2024-10-25 17:51:00.573043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.175 [2024-10-25 17:51:00.573064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:42.175 [2024-10-25 17:51:00.573203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.175 17:51:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.175 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.437 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.437 "name": "raid_bdev1", 00:09:42.437 "uuid": "d4e0cdf8-eb47-4219-8dc6-2ccdd6f724f6", 00:09:42.437 "strip_size_kb": 0, 00:09:42.437 "state": "online", 00:09:42.437 "raid_level": "raid1", 00:09:42.437 "superblock": true, 00:09:42.437 "num_base_bdevs": 3, 00:09:42.437 "num_base_bdevs_discovered": 3, 00:09:42.437 "num_base_bdevs_operational": 3, 00:09:42.437 "base_bdevs_list": [ 00:09:42.437 { 00:09:42.437 "name": "BaseBdev1", 00:09:42.437 "uuid": "63960dcf-0603-5973-b0d5-14777c1d8def", 00:09:42.437 "is_configured": true, 00:09:42.437 "data_offset": 2048, 00:09:42.437 "data_size": 63488 00:09:42.437 }, 00:09:42.437 { 00:09:42.437 "name": "BaseBdev2", 00:09:42.437 "uuid": "b218ae6d-d3f7-507c-9592-c2e155ae6ddd", 00:09:42.437 "is_configured": true, 00:09:42.437 "data_offset": 2048, 00:09:42.437 "data_size": 63488 
00:09:42.437 }, 00:09:42.437 { 00:09:42.437 "name": "BaseBdev3", 00:09:42.437 "uuid": "d7330e77-245d-5016-9776-760824132feb", 00:09:42.437 "is_configured": true, 00:09:42.437 "data_offset": 2048, 00:09:42.437 "data_size": 63488 00:09:42.437 } 00:09:42.437 ] 00:09:42.437 }' 00:09:42.437 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.437 17:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:42.699 17:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:42.699 [2024-10-25 17:51:01.086954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:43.639 17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.639 
17:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.639 "name": "raid_bdev1", 00:09:43.639 "uuid": "d4e0cdf8-eb47-4219-8dc6-2ccdd6f724f6", 00:09:43.639 "strip_size_kb": 0, 00:09:43.639 "state": "online", 00:09:43.639 "raid_level": "raid1", 00:09:43.639 "superblock": true, 00:09:43.639 "num_base_bdevs": 3, 00:09:43.639 "num_base_bdevs_discovered": 3, 00:09:43.639 "num_base_bdevs_operational": 3, 00:09:43.639 "base_bdevs_list": [ 00:09:43.639 { 00:09:43.639 "name": "BaseBdev1", 00:09:43.639 "uuid": "63960dcf-0603-5973-b0d5-14777c1d8def", 
00:09:43.639 "is_configured": true, 00:09:43.639 "data_offset": 2048, 00:09:43.639 "data_size": 63488 00:09:43.639 }, 00:09:43.639 { 00:09:43.639 "name": "BaseBdev2", 00:09:43.639 "uuid": "b218ae6d-d3f7-507c-9592-c2e155ae6ddd", 00:09:43.639 "is_configured": true, 00:09:43.639 "data_offset": 2048, 00:09:43.639 "data_size": 63488 00:09:43.639 }, 00:09:43.639 { 00:09:43.639 "name": "BaseBdev3", 00:09:43.639 "uuid": "d7330e77-245d-5016-9776-760824132feb", 00:09:43.639 "is_configured": true, 00:09:43.639 "data_offset": 2048, 00:09:43.639 "data_size": 63488 00:09:43.639 } 00:09:43.639 ] 00:09:43.639 }' 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.639 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.209 [2024-10-25 17:51:02.410117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.209 [2024-10-25 17:51:02.410153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.209 [2024-10-25 17:51:02.412733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.209 [2024-10-25 17:51:02.412783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.209 [2024-10-25 17:51:02.412893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.209 [2024-10-25 17:51:02.412904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:44.209 { 00:09:44.209 "results": [ 00:09:44.209 { 00:09:44.209 "job": "raid_bdev1", 
00:09:44.209 "core_mask": "0x1", 00:09:44.209 "workload": "randrw", 00:09:44.209 "percentage": 50, 00:09:44.209 "status": "finished", 00:09:44.209 "queue_depth": 1, 00:09:44.209 "io_size": 131072, 00:09:44.209 "runtime": 1.324124, 00:09:44.209 "iops": 14025.876730578102, 00:09:44.209 "mibps": 1753.2345913222628, 00:09:44.209 "io_failed": 0, 00:09:44.209 "io_timeout": 0, 00:09:44.209 "avg_latency_us": 68.77561733068609, 00:09:44.209 "min_latency_us": 22.581659388646287, 00:09:44.209 "max_latency_us": 1330.7528384279476 00:09:44.209 } 00:09:44.209 ], 00:09:44.209 "core_count": 1 00:09:44.209 } 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68842 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 68842 ']' 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 68842 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68842 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.209 killing process with pid 68842 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68842' 00:09:44.209 17:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 68842 00:09:44.209 [2024-10-25 17:51:02.450506] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.209 17:51:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 68842 00:09:44.470 [2024-10-25 17:51:02.676391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5k93WjcYnt 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:45.431 00:09:45.431 real 0m4.383s 00:09:45.431 user 0m5.153s 00:09:45.431 sys 0m0.569s 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.431 17:51:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.431 ************************************ 00:09:45.431 END TEST raid_read_error_test 00:09:45.431 ************************************ 00:09:45.431 17:51:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:45.431 17:51:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.431 17:51:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.431 17:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 ************************************ 00:09:45.692 START TEST raid_write_error_test 00:09:45.692 ************************************ 00:09:45.692 17:51:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xPjC9MVZ4x 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68982 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68982 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 68982 ']' 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.692 17:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 [2024-10-25 17:51:03.979436] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:09:45.692 [2024-10-25 17:51:03.979541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68982 ] 00:09:45.952 [2024-10-25 17:51:04.149926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.952 [2024-10-25 17:51:04.256412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.212 [2024-10-25 17:51:04.452649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.212 [2024-10-25 17:51:04.452715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.471 BaseBdev1_malloc 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.471 true 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.471 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.471 [2024-10-25 17:51:04.885843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:46.471 [2024-10-25 17:51:04.885947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.471 [2024-10-25 17:51:04.885969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:46.471 [2024-10-25 17:51:04.885980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.471 [2024-10-25 17:51:04.887969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.472 [2024-10-25 17:51:04.888007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:46.472 BaseBdev1 00:09:46.472 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.472 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.472 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:46.472 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.472 17:51:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.732 BaseBdev2_malloc 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.732 true 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.732 [2024-10-25 17:51:04.951359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:46.732 [2024-10-25 17:51:04.951409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.732 [2024-10-25 17:51:04.951425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:46.732 [2024-10-25 17:51:04.951435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.732 [2024-10-25 17:51:04.953471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.732 [2024-10-25 17:51:04.953559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:46.732 BaseBdev2 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.732 17:51:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.732 17:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.732 BaseBdev3_malloc 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.732 true 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.732 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.733 [2024-10-25 17:51:05.028942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.733 [2024-10-25 17:51:05.028991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.733 [2024-10-25 17:51:05.029007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:46.733 [2024-10-25 17:51:05.029017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.733 [2024-10-25 17:51:05.031010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.733 [2024-10-25 17:51:05.031050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:46.733 BaseBdev3 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.733 [2024-10-25 17:51:05.040986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.733 [2024-10-25 17:51:05.042706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.733 [2024-10-25 17:51:05.042780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.733 [2024-10-25 17:51:05.042988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:46.733 [2024-10-25 17:51:05.043009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.733 [2024-10-25 17:51:05.043232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.733 [2024-10-25 17:51:05.043389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:46.733 [2024-10-25 17:51:05.043400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:46.733 [2024-10-25 17:51:05.043551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.733 "name": "raid_bdev1", 00:09:46.733 "uuid": "0d3b2552-dd8e-410f-906c-031b1397f142", 00:09:46.733 "strip_size_kb": 0, 00:09:46.733 "state": "online", 00:09:46.733 "raid_level": "raid1", 00:09:46.733 "superblock": true, 00:09:46.733 "num_base_bdevs": 3, 00:09:46.733 "num_base_bdevs_discovered": 3, 00:09:46.733 "num_base_bdevs_operational": 3, 00:09:46.733 "base_bdevs_list": [ 00:09:46.733 { 00:09:46.733 "name": "BaseBdev1", 00:09:46.733 
"uuid": "51f23306-e8f4-521e-8951-36c3aff80367", 00:09:46.733 "is_configured": true, 00:09:46.733 "data_offset": 2048, 00:09:46.733 "data_size": 63488 00:09:46.733 }, 00:09:46.733 { 00:09:46.733 "name": "BaseBdev2", 00:09:46.733 "uuid": "7077ed3a-ec50-5328-98db-98d2c07c25b7", 00:09:46.733 "is_configured": true, 00:09:46.733 "data_offset": 2048, 00:09:46.733 "data_size": 63488 00:09:46.733 }, 00:09:46.733 { 00:09:46.733 "name": "BaseBdev3", 00:09:46.733 "uuid": "ef795728-80d6-5b42-98c9-becf9dd41b57", 00:09:46.733 "is_configured": true, 00:09:46.733 "data_offset": 2048, 00:09:46.733 "data_size": 63488 00:09:46.733 } 00:09:46.733 ] 00:09:46.733 }' 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.733 17:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.304 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:47.304 17:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.304 [2024-10-25 17:51:05.545458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:48.243 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:48.243 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.243 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.243 [2024-10-25 17:51:06.464237] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:48.243 [2024-10-25 17:51:06.464295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.243 [2024-10-25 17:51:06.464515] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.244 "name": "raid_bdev1", 00:09:48.244 "uuid": "0d3b2552-dd8e-410f-906c-031b1397f142", 00:09:48.244 "strip_size_kb": 0, 00:09:48.244 "state": "online", 00:09:48.244 "raid_level": "raid1", 00:09:48.244 "superblock": true, 00:09:48.244 "num_base_bdevs": 3, 00:09:48.244 "num_base_bdevs_discovered": 2, 00:09:48.244 "num_base_bdevs_operational": 2, 00:09:48.244 "base_bdevs_list": [ 00:09:48.244 { 00:09:48.244 "name": null, 00:09:48.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.244 "is_configured": false, 00:09:48.244 "data_offset": 0, 00:09:48.244 "data_size": 63488 00:09:48.244 }, 00:09:48.244 { 00:09:48.244 "name": "BaseBdev2", 00:09:48.244 "uuid": "7077ed3a-ec50-5328-98db-98d2c07c25b7", 00:09:48.244 "is_configured": true, 00:09:48.244 "data_offset": 2048, 00:09:48.244 "data_size": 63488 00:09:48.244 }, 00:09:48.244 { 00:09:48.244 "name": "BaseBdev3", 00:09:48.244 "uuid": "ef795728-80d6-5b42-98c9-becf9dd41b57", 00:09:48.244 "is_configured": true, 00:09:48.244 "data_offset": 2048, 00:09:48.244 "data_size": 63488 00:09:48.244 } 00:09:48.244 ] 00:09:48.244 }' 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.244 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.504 [2024-10-25 17:51:06.922338] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.504 [2024-10-25 17:51:06.922426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.504 [2024-10-25 17:51:06.925125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.504 [2024-10-25 17:51:06.925225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.504 [2024-10-25 17:51:06.925325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.504 [2024-10-25 17:51:06.925379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.504 { 00:09:48.504 "results": [ 00:09:48.504 { 00:09:48.504 "job": "raid_bdev1", 00:09:48.504 "core_mask": "0x1", 00:09:48.504 "workload": "randrw", 00:09:48.504 "percentage": 50, 00:09:48.504 "status": "finished", 00:09:48.504 "queue_depth": 1, 00:09:48.504 "io_size": 131072, 00:09:48.504 "runtime": 1.37788, 00:09:48.504 "iops": 15732.13922837983, 00:09:48.504 "mibps": 1966.5174035474788, 00:09:48.504 "io_failed": 0, 00:09:48.504 "io_timeout": 0, 00:09:48.504 "avg_latency_us": 61.094140832665694, 00:09:48.504 "min_latency_us": 22.246288209606988, 00:09:48.504 "max_latency_us": 1430.9170305676855 00:09:48.504 } 00:09:48.504 ], 00:09:48.504 "core_count": 1 00:09:48.504 } 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68982 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 68982 ']' 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 68982 00:09:48.504 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:48.504 17:51:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68982 00:09:48.764 killing process with pid 68982 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68982' 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 68982 00:09:48.764 17:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 68982 00:09:48.764 [2024-10-25 17:51:06.969584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.764 [2024-10-25 17:51:07.180262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xPjC9MVZ4x 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:50.145 ************************************ 00:09:50.145 END TEST raid_write_error_test 00:09:50.145 ************************************ 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:50.145 00:09:50.145 real 0m4.416s 00:09:50.145 user 0m5.237s 00:09:50.145 sys 0m0.582s 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.145 17:51:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.145 17:51:08 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:50.145 17:51:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:50.145 17:51:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:50.145 17:51:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:50.145 17:51:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.145 17:51:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.145 ************************************ 00:09:50.145 START TEST raid_state_function_test 00:09:50.145 ************************************ 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.145 
17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.145 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:50.146 17:51:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69120 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69120' 00:09:50.146 Process raid pid: 69120 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69120 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69120 ']' 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.146 17:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.146 [2024-10-25 17:51:08.458258] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:09:50.146 [2024-10-25 17:51:08.458437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.405 [2024-10-25 17:51:08.635005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.405 [2024-10-25 17:51:08.744794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.665 [2024-10-25 17:51:08.939386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.665 [2024-10-25 17:51:08.939532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.924 [2024-10-25 17:51:09.279025] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.924 [2024-10-25 17:51:09.279081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.924 [2024-10-25 17:51:09.279092] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.924 [2024-10-25 17:51:09.279101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.924 [2024-10-25 17:51:09.279107] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:50.924 [2024-10-25 17:51:09.279116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.924 [2024-10-25 17:51:09.279121] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:50.924 [2024-10-25 17:51:09.279129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.924 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.924 "name": "Existed_Raid", 00:09:50.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.924 "strip_size_kb": 64, 00:09:50.924 "state": "configuring", 00:09:50.924 "raid_level": "raid0", 00:09:50.924 "superblock": false, 00:09:50.924 "num_base_bdevs": 4, 00:09:50.924 "num_base_bdevs_discovered": 0, 00:09:50.924 "num_base_bdevs_operational": 4, 00:09:50.924 "base_bdevs_list": [ 00:09:50.924 { 00:09:50.924 "name": "BaseBdev1", 00:09:50.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.924 "is_configured": false, 00:09:50.924 "data_offset": 0, 00:09:50.924 "data_size": 0 00:09:50.924 }, 00:09:50.924 { 00:09:50.924 "name": "BaseBdev2", 00:09:50.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.925 "is_configured": false, 00:09:50.925 "data_offset": 0, 00:09:50.925 "data_size": 0 00:09:50.925 }, 00:09:50.925 { 00:09:50.925 "name": "BaseBdev3", 00:09:50.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.925 "is_configured": false, 00:09:50.925 "data_offset": 0, 00:09:50.925 "data_size": 0 00:09:50.925 }, 00:09:50.925 { 00:09:50.925 "name": "BaseBdev4", 00:09:50.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.925 "is_configured": false, 00:09:50.925 "data_offset": 0, 00:09:50.925 "data_size": 0 00:09:50.925 } 00:09:50.925 ] 00:09:50.925 }' 00:09:50.925 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.925 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.493 [2024-10-25 17:51:09.678292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:51.493 [2024-10-25 17:51:09.678336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.493 [2024-10-25 17:51:09.690255] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:51.493 [2024-10-25 17:51:09.690293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:51.493 [2024-10-25 17:51:09.690302] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:51.493 [2024-10-25 17:51:09.690310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:51.493 [2024-10-25 17:51:09.690316] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:51.493 [2024-10-25 17:51:09.690323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:51.493 [2024-10-25 17:51:09.690329] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:51.493 [2024-10-25 17:51:09.690337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.493 [2024-10-25 17:51:09.736134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:51.493 BaseBdev1
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:51.493 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.494 [
00:09:51.494 {
00:09:51.494 "name": "BaseBdev1",
00:09:51.494 "aliases": [
00:09:51.494 "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c"
00:09:51.494 ],
00:09:51.494 "product_name": "Malloc disk",
00:09:51.494 "block_size": 512,
00:09:51.494 "num_blocks": 65536,
00:09:51.494 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:51.494 "assigned_rate_limits": {
00:09:51.494 "rw_ios_per_sec": 0,
00:09:51.494 "rw_mbytes_per_sec": 0,
00:09:51.494 "r_mbytes_per_sec": 0,
00:09:51.494 "w_mbytes_per_sec": 0
00:09:51.494 },
00:09:51.494 "claimed": true,
00:09:51.494 "claim_type": "exclusive_write",
00:09:51.494 "zoned": false,
00:09:51.494 "supported_io_types": {
00:09:51.494 "read": true,
00:09:51.494 "write": true,
00:09:51.494 "unmap": true,
00:09:51.494 "flush": true,
00:09:51.494 "reset": true,
00:09:51.494 "nvme_admin": false,
00:09:51.494 "nvme_io": false,
00:09:51.494 "nvme_io_md": false,
00:09:51.494 "write_zeroes": true,
00:09:51.494 "zcopy": true,
00:09:51.494 "get_zone_info": false,
00:09:51.494 "zone_management": false,
00:09:51.494 "zone_append": false,
00:09:51.494 "compare": false,
00:09:51.494 "compare_and_write": false,
00:09:51.494 "abort": true,
00:09:51.494 "seek_hole": false,
00:09:51.494 "seek_data": false,
00:09:51.494 "copy": true,
00:09:51.494 "nvme_iov_md": false
00:09:51.494 },
00:09:51.494 "memory_domains": [
00:09:51.494 {
00:09:51.494 "dma_device_id": "system",
00:09:51.494 "dma_device_type": 1
00:09:51.494 },
00:09:51.494 {
00:09:51.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:51.494 "dma_device_type": 2
00:09:51.494 }
00:09:51.494 ],
00:09:51.494 "driver_specific": {}
00:09:51.494 }
00:09:51.494 ]
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.494 "name": "Existed_Raid",
00:09:51.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.494 "strip_size_kb": 64,
00:09:51.494 "state": "configuring",
00:09:51.494 "raid_level": "raid0",
00:09:51.494 "superblock": false,
00:09:51.494 "num_base_bdevs": 4,
00:09:51.494 "num_base_bdevs_discovered": 1,
00:09:51.494 "num_base_bdevs_operational": 4,
00:09:51.494 "base_bdevs_list": [
00:09:51.494 {
00:09:51.494 "name": "BaseBdev1",
00:09:51.494 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:51.494 "is_configured": true,
00:09:51.494 "data_offset": 0,
00:09:51.494 "data_size": 65536
00:09:51.494 },
00:09:51.494 {
00:09:51.494 "name": "BaseBdev2",
00:09:51.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.494 "is_configured": false,
00:09:51.494 "data_offset": 0,
00:09:51.494 "data_size": 0
00:09:51.494 },
00:09:51.494 {
00:09:51.494 "name": "BaseBdev3",
00:09:51.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.494 "is_configured": false,
00:09:51.494 "data_offset": 0,
00:09:51.494 "data_size": 0
00:09:51.494 },
00:09:51.494 {
00:09:51.494 "name": "BaseBdev4",
00:09:51.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.494 "is_configured": false,
00:09:51.494 "data_offset": 0,
00:09:51.494 "data_size": 0
00:09:51.494 }
00:09:51.494 ]
00:09:51.494 }'
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.494 17:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.753 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:51.753 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.753 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.013 [2024-10-25 17:51:10.195384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:52.013 [2024-10-25 17:51:10.195443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.013 [2024-10-25 17:51:10.203418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:52.013 [2024-10-25 17:51:10.205213] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:52.013 [2024-10-25 17:51:10.205254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:52.013 [2024-10-25 17:51:10.205264] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:52.013 [2024-10-25 17:51:10.205287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:52.013 [2024-10-25 17:51:10.205310] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:52.013 [2024-10-25 17:51:10.205318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.013 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.014 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.014 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.014 "name": "Existed_Raid",
00:09:52.014 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.014 "strip_size_kb": 64,
00:09:52.014 "state": "configuring",
00:09:52.014 "raid_level": "raid0",
00:09:52.014 "superblock": false,
00:09:52.014 "num_base_bdevs": 4,
00:09:52.014 "num_base_bdevs_discovered": 1,
00:09:52.014 "num_base_bdevs_operational": 4,
00:09:52.014 "base_bdevs_list": [
00:09:52.014 {
00:09:52.014 "name": "BaseBdev1",
00:09:52.014 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:52.014 "is_configured": true,
00:09:52.014 "data_offset": 0,
00:09:52.014 "data_size": 65536
00:09:52.014 },
00:09:52.014 {
00:09:52.014 "name": "BaseBdev2",
00:09:52.014 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.014 "is_configured": false,
00:09:52.014 "data_offset": 0,
00:09:52.014 "data_size": 0
00:09:52.014 },
00:09:52.014 {
00:09:52.014 "name": "BaseBdev3",
00:09:52.014 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.014 "is_configured": false,
00:09:52.014 "data_offset": 0,
00:09:52.014 "data_size": 0
00:09:52.014 },
00:09:52.014 {
00:09:52.014 "name": "BaseBdev4",
00:09:52.014 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.014 "is_configured": false,
00:09:52.014 "data_offset": 0,
00:09:52.014 "data_size": 0
00:09:52.014 }
00:09:52.014 ]
00:09:52.014 }'
00:09:52.014 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.014 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.273 [2024-10-25 17:51:10.671164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:52.273 BaseBdev2
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.273 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.273 [
00:09:52.273 {
00:09:52.273 "name": "BaseBdev2",
00:09:52.273 "aliases": [
00:09:52.273 "3f1e8327-1aa8-408c-a05f-4a15a750b460"
00:09:52.273 ],
00:09:52.273 "product_name": "Malloc disk",
00:09:52.273 "block_size": 512,
00:09:52.273 "num_blocks": 65536,
00:09:52.273 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460",
00:09:52.273 "assigned_rate_limits": {
00:09:52.273 "rw_ios_per_sec": 0,
00:09:52.273 "rw_mbytes_per_sec": 0,
00:09:52.273 "r_mbytes_per_sec": 0,
00:09:52.273 "w_mbytes_per_sec": 0
00:09:52.273 },
00:09:52.273 "claimed": true,
00:09:52.273 "claim_type": "exclusive_write",
00:09:52.273 "zoned": false,
00:09:52.273 "supported_io_types": {
00:09:52.273 "read": true,
00:09:52.273 "write": true,
00:09:52.273 "unmap": true,
00:09:52.273 "flush": true,
00:09:52.273 "reset": true,
00:09:52.273 "nvme_admin": false,
00:09:52.273 "nvme_io": false,
00:09:52.273 "nvme_io_md": false,
00:09:52.273 "write_zeroes": true,
00:09:52.273 "zcopy": true,
00:09:52.273 "get_zone_info": false,
00:09:52.273 "zone_management": false,
00:09:52.273 "zone_append": false,
00:09:52.273 "compare": false,
00:09:52.273 "compare_and_write": false,
00:09:52.273 "abort": true,
00:09:52.273 "seek_hole": false,
00:09:52.273 "seek_data": false,
00:09:52.273 "copy": true,
00:09:52.273 "nvme_iov_md": false
00:09:52.273 },
00:09:52.273 "memory_domains": [
00:09:52.273 {
00:09:52.273 "dma_device_id": "system",
00:09:52.273 "dma_device_type": 1
00:09:52.273 },
00:09:52.273 {
00:09:52.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:52.273 "dma_device_type": 2
00:09:52.274 }
00:09:52.274 ],
00:09:52.274 "driver_specific": {}
00:09:52.274 }
00:09:52.274 ]
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.534 "name": "Existed_Raid",
00:09:52.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.534 "strip_size_kb": 64,
00:09:52.534 "state": "configuring",
00:09:52.534 "raid_level": "raid0",
00:09:52.534 "superblock": false,
00:09:52.534 "num_base_bdevs": 4,
00:09:52.534 "num_base_bdevs_discovered": 2,
00:09:52.534 "num_base_bdevs_operational": 4,
00:09:52.534 "base_bdevs_list": [
00:09:52.534 {
00:09:52.534 "name": "BaseBdev1",
00:09:52.534 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:52.534 "is_configured": true,
00:09:52.534 "data_offset": 0,
00:09:52.534 "data_size": 65536
00:09:52.534 },
00:09:52.534 {
00:09:52.534 "name": "BaseBdev2",
00:09:52.534 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460",
00:09:52.534 "is_configured": true,
00:09:52.534 "data_offset": 0,
00:09:52.534 "data_size": 65536
00:09:52.534 },
00:09:52.534 {
00:09:52.534 "name": "BaseBdev3",
00:09:52.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.534 "is_configured": false,
00:09:52.534 "data_offset": 0,
00:09:52.534 "data_size": 0
00:09:52.534 },
00:09:52.534 {
00:09:52.534 "name": "BaseBdev4",
00:09:52.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.534 "is_configured": false,
00:09:52.534 "data_offset": 0,
00:09:52.534 "data_size": 0
00:09:52.534 }
00:09:52.534 ]
00:09:52.534 }'
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.534 17:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.795 [2024-10-25 17:51:11.174976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:52.795 BaseBdev3
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.795 [
00:09:52.795 {
00:09:52.795 "name": "BaseBdev3",
00:09:52.795 "aliases": [
00:09:52.795 "09bd6df6-6406-4792-ae26-6ed62b6ed695"
00:09:52.795 ],
00:09:52.795 "product_name": "Malloc disk",
00:09:52.795 "block_size": 512,
00:09:52.795 "num_blocks": 65536,
00:09:52.795 "uuid": "09bd6df6-6406-4792-ae26-6ed62b6ed695",
00:09:52.795 "assigned_rate_limits": {
00:09:52.795 "rw_ios_per_sec": 0,
00:09:52.795 "rw_mbytes_per_sec": 0,
00:09:52.795 "r_mbytes_per_sec": 0,
00:09:52.795 "w_mbytes_per_sec": 0
00:09:52.795 },
00:09:52.795 "claimed": true,
00:09:52.795 "claim_type": "exclusive_write",
00:09:52.795 "zoned": false,
00:09:52.795 "supported_io_types": {
00:09:52.795 "read": true,
00:09:52.795 "write": true,
00:09:52.795 "unmap": true,
00:09:52.795 "flush": true,
00:09:52.795 "reset": true,
00:09:52.795 "nvme_admin": false,
00:09:52.795 "nvme_io": false,
00:09:52.795 "nvme_io_md": false,
00:09:52.795 "write_zeroes": true,
00:09:52.795 "zcopy": true,
00:09:52.795 "get_zone_info": false,
00:09:52.795 "zone_management": false,
00:09:52.795 "zone_append": false,
00:09:52.795 "compare": false,
00:09:52.795 "compare_and_write": false,
00:09:52.795 "abort": true,
00:09:52.795 "seek_hole": false,
00:09:52.795 "seek_data": false,
00:09:52.795 "copy": true,
00:09:52.795 "nvme_iov_md": false
00:09:52.795 },
00:09:52.795 "memory_domains": [
00:09:52.795 {
00:09:52.795 "dma_device_id": "system",
00:09:52.795 "dma_device_type": 1
00:09:52.795 },
00:09:52.795 {
00:09:52.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:52.795 "dma_device_type": 2
00:09:52.795 }
00:09:52.795 ],
00:09:52.795 "driver_specific": {}
00:09:52.795 }
00:09:52.795 ]
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.795 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.054 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.054 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:53.054 "name": "Existed_Raid",
00:09:53.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:53.054 "strip_size_kb": 64,
00:09:53.054 "state": "configuring",
00:09:53.054 "raid_level": "raid0",
00:09:53.054 "superblock": false,
00:09:53.054 "num_base_bdevs": 4,
00:09:53.054 "num_base_bdevs_discovered": 3,
00:09:53.054 "num_base_bdevs_operational": 4,
00:09:53.054 "base_bdevs_list": [
00:09:53.054 {
00:09:53.054 "name": "BaseBdev1",
00:09:53.054 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:53.054 "is_configured": true,
00:09:53.054 "data_offset": 0,
00:09:53.054 "data_size": 65536
00:09:53.054 },
00:09:53.054 {
00:09:53.054 "name": "BaseBdev2",
00:09:53.054 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460",
00:09:53.054 "is_configured": true,
00:09:53.054 "data_offset": 0,
00:09:53.054 "data_size": 65536
00:09:53.054 },
00:09:53.054 {
00:09:53.054 "name": "BaseBdev3",
00:09:53.054 "uuid": "09bd6df6-6406-4792-ae26-6ed62b6ed695",
00:09:53.054 "is_configured": true,
00:09:53.054 "data_offset": 0,
00:09:53.054 "data_size": 65536
00:09:53.054 },
00:09:53.054 {
00:09:53.054 "name": "BaseBdev4",
00:09:53.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:53.054 "is_configured": false,
00:09:53.054 "data_offset": 0,
00:09:53.054 "data_size": 0
00:09:53.054 }
00:09:53.054 ]
00:09:53.054 }'
00:09:53.054 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:53.054 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.315 [2024-10-25 17:51:11.659119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:53.315 [2024-10-25 17:51:11.659168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:53.315 [2024-10-25 17:51:11.659177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:09:53.315 [2024-10-25 17:51:11.659443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:53.315 [2024-10-25 17:51:11.659640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:53.315 [2024-10-25 17:51:11.659660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:53.315 [2024-10-25 17:51:11.659946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:53.315 BaseBdev4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.315 [
00:09:53.315 {
00:09:53.315 "name": "BaseBdev4",
00:09:53.315 "aliases": [
00:09:53.315 "1dda0289-8fa3-4c41-9602-1592ff307d49"
00:09:53.315 ],
00:09:53.315 "product_name": "Malloc disk",
00:09:53.315 "block_size": 512,
00:09:53.315 "num_blocks": 65536,
00:09:53.315 "uuid": "1dda0289-8fa3-4c41-9602-1592ff307d49",
00:09:53.315 "assigned_rate_limits": {
00:09:53.315 "rw_ios_per_sec": 0,
00:09:53.315 "rw_mbytes_per_sec": 0,
00:09:53.315 "r_mbytes_per_sec": 0,
00:09:53.315 "w_mbytes_per_sec": 0
00:09:53.315 },
00:09:53.315 "claimed": true,
00:09:53.315 "claim_type": "exclusive_write",
00:09:53.315 "zoned": false,
00:09:53.315 "supported_io_types": {
00:09:53.315 "read": true,
00:09:53.315 "write": true,
00:09:53.315 "unmap": true,
00:09:53.315 "flush": true,
00:09:53.315 "reset": true,
00:09:53.315 "nvme_admin": false,
00:09:53.315 "nvme_io": false,
00:09:53.315 "nvme_io_md": false,
00:09:53.315 "write_zeroes": true,
00:09:53.315 "zcopy": true,
00:09:53.315 "get_zone_info": false,
00:09:53.315 "zone_management": false,
00:09:53.315 "zone_append": false,
00:09:53.315 "compare": false,
00:09:53.315 "compare_and_write": false,
00:09:53.315 "abort": true,
00:09:53.315 "seek_hole": false,
00:09:53.315 "seek_data": false,
00:09:53.315 "copy": true,
00:09:53.315 "nvme_iov_md": false
00:09:53.315 },
00:09:53.315 "memory_domains": [
00:09:53.315 {
00:09:53.315 "dma_device_id": "system",
00:09:53.315 "dma_device_type": 1
00:09:53.315 },
00:09:53.315 {
00:09:53.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.315 "dma_device_type": 2
00:09:53.315 }
00:09:53.315 ],
00:09:53.315 "driver_specific": {}
00:09:53.315 }
00:09:53.315 ]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:53.315 "name": "Existed_Raid",
00:09:53.315 "uuid": "9b779aa0-16df-48d9-9e87-bbdf58a51c6d",
00:09:53.315 "strip_size_kb": 64,
00:09:53.315 "state": "online",
00:09:53.315 "raid_level": "raid0",
00:09:53.315 "superblock": false,
00:09:53.315 "num_base_bdevs": 4,
00:09:53.315 "num_base_bdevs_discovered": 4,
00:09:53.315 "num_base_bdevs_operational": 4,
00:09:53.315 "base_bdevs_list": [
00:09:53.315 {
00:09:53.315 "name": "BaseBdev1",
00:09:53.315 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c",
00:09:53.315 "is_configured": true,
00:09:53.315 "data_offset": 0,
00:09:53.315 "data_size": 65536
00:09:53.315 },
00:09:53.315 {
00:09:53.315 "name": "BaseBdev2",
00:09:53.315 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460",
00:09:53.315 "is_configured": true,
00:09:53.315 "data_offset": 0,
00:09:53.315 "data_size": 65536
00:09:53.315 },
00:09:53.315 {
00:09:53.315 "name": "BaseBdev3",
00:09:53.315 "uuid": "09bd6df6-6406-4792-ae26-6ed62b6ed695",
00:09:53.315 "is_configured": true,
00:09:53.315 "data_offset": 0,
00:09:53.315 "data_size": 65536
00:09:53.315 },
00:09:53.315 {
00:09:53.315 "name": "BaseBdev4",
00:09:53.315 "uuid": "1dda0289-8fa3-4c41-9602-1592ff307d49",
00:09:53.315 "is_configured": true,
00:09:53.315 "data_offset": 0,
00:09:53.315 "data_size": 65536
00:09:53.315 }
00:09:53.315 ]
00:09:53.315 }'
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:53.315 17:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-10-25 17:51:12.106695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:53.885 "name": "Existed_Raid",
00:09:53.885 "aliases": [
00:09:53.885 "9b779aa0-16df-48d9-9e87-bbdf58a51c6d"
00:09:53.885 ],
00:09:53.885 "product_name": "Raid Volume",
00:09:53.885 "block_size": 512,
00:09:53.885 "num_blocks": 262144,
00:09:53.885 "uuid": "9b779aa0-16df-48d9-9e87-bbdf58a51c6d",
00:09:53.885 "assigned_rate_limits": {
00:09:53.885 "rw_ios_per_sec": 0,
00:09:53.885 "rw_mbytes_per_sec": 0,
00:09:53.885 "r_mbytes_per_sec": 0,
00:09:53.885 "w_mbytes_per_sec": 0
00:09:53.885 },
00:09:53.885 "claimed": false,
00:09:53.885 "zoned": false,
00:09:53.885 "supported_io_types": {
00:09:53.885 "read": true,
00:09:53.885 "write": true,
00:09:53.885 "unmap": true,
00:09:53.885 "flush": true,
00:09:53.885 "reset": true,
00:09:53.885 "nvme_admin": false,
00:09:53.885 "nvme_io": false,
00:09:53.885 "nvme_io_md": false,
00:09:53.885 "write_zeroes": true,
00:09:53.885 "zcopy": false,
00:09:53.885 "get_zone_info": false,
00:09:53.885 "zone_management": false,
00:09:53.885 "zone_append": false,
00:09:53.885 "compare": false,
00:09:53.885 "compare_and_write": false,
00:09:53.885 "abort": false,
00:09:53.885 "seek_hole": false,
00:09:53.885 "seek_data": false,
00:09:53.885 "copy": false,
00:09:53.885 "nvme_iov_md": false
00:09:53.885 },
00:09:53.885 "memory_domains": [
00:09:53.885 {
00:09:53.885 "dma_device_id": "system",
00:09:53.885 "dma_device_type": 1
00:09:53.885 },
00:09:53.885 {
00:09:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.885 "dma_device_type": 2
00:09:53.885 },
00:09:53.885 {
00:09:53.885 "dma_device_id": "system",
00:09:53.885 "dma_device_type": 1
00:09:53.885 },
00:09:53.885 {
00:09:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.885 "dma_device_type": 2
00:09:53.885 },
00:09:53.885 {
00:09:53.885 "dma_device_id": "system",
00:09:53.885 "dma_device_type": 1
00:09:53.885 },
00:09:53.885 {
00:09:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.885 "dma_device_type": 2 00:09:53.885 }, 00:09:53.885 { 00:09:53.885 "dma_device_id": "system", 00:09:53.885 "dma_device_type": 1 00:09:53.885 }, 00:09:53.885 { 00:09:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.885 "dma_device_type": 2 00:09:53.885 } 00:09:53.885 ], 00:09:53.885 "driver_specific": { 00:09:53.885 "raid": { 00:09:53.885 "uuid": "9b779aa0-16df-48d9-9e87-bbdf58a51c6d", 00:09:53.885 "strip_size_kb": 64, 00:09:53.885 "state": "online", 00:09:53.885 "raid_level": "raid0", 00:09:53.885 "superblock": false, 00:09:53.885 "num_base_bdevs": 4, 00:09:53.885 "num_base_bdevs_discovered": 4, 00:09:53.885 "num_base_bdevs_operational": 4, 00:09:53.885 "base_bdevs_list": [ 00:09:53.885 { 00:09:53.885 "name": "BaseBdev1", 00:09:53.885 "uuid": "7ea9e3c2-fd82-4a09-93bf-5e22366c6d8c", 00:09:53.885 "is_configured": true, 00:09:53.885 "data_offset": 0, 00:09:53.885 "data_size": 65536 00:09:53.885 }, 00:09:53.885 { 00:09:53.885 "name": "BaseBdev2", 00:09:53.885 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460", 00:09:53.885 "is_configured": true, 00:09:53.885 "data_offset": 0, 00:09:53.885 "data_size": 65536 00:09:53.885 }, 00:09:53.885 { 00:09:53.885 "name": "BaseBdev3", 00:09:53.885 "uuid": "09bd6df6-6406-4792-ae26-6ed62b6ed695", 00:09:53.885 "is_configured": true, 00:09:53.885 "data_offset": 0, 00:09:53.885 "data_size": 65536 00:09:53.885 }, 00:09:53.885 { 00:09:53.885 "name": "BaseBdev4", 00:09:53.885 "uuid": "1dda0289-8fa3-4c41-9602-1592ff307d49", 00:09:53.885 "is_configured": true, 00:09:53.885 "data_offset": 0, 00:09:53.885 "data_size": 65536 00:09:53.885 } 00:09:53.885 ] 00:09:53.885 } 00:09:53.885 } 00:09:53.885 }' 00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:53.885 BaseBdev2 00:09:53.885 BaseBdev3 
00:09:53.885 BaseBdev4' 00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.885 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.886 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.146 17:51:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.146 17:51:12 
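The comparisons above repeat one pattern: the trace builds a property tuple for the Raid Volume with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` (absent fields become empty strings, so a 512-byte bdev with no metadata yields `512` plus three trailing spaces), then checks each base bdev's tuple against it. A minimal runnable sketch of that loop, with the jq/RPC results hard-coded as stand-ins:

```shell
# Sketch (not the SPDK script itself) of bdev_raid.sh's per-base-bdev
# property check. The '512   ' strings stand in for the output of
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# which the real script runs against rpc_cmd bdev_get_bdevs output.
cmp_raid_bdev='512   '            # tuple taken from the Raid Volume
base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'

for name in $base_bdev_names; do
    cmp_base_bdev='512   '        # assumption: every Malloc base bdev matches
    # [[ ]] does pattern matching, which is why xtrace prints the RHS
    # as \5\1\2\ \ \  in the log; quoting it forces a literal compare.
    if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
        echo "$name: properties match"
    else
        echo "$name: mismatch" >&2
        exit 1
    fi
done
```

The trailing spaces matter: a base bdev with metadata would produce a longer tuple and fail the literal comparison.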
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.146 [2024-10-25 17:51:12.413964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.146 [2024-10-25 17:51:12.413995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.146 [2024-10-25 17:51:12.414042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.146 "name": "Existed_Raid", 00:09:54.146 "uuid": "9b779aa0-16df-48d9-9e87-bbdf58a51c6d", 00:09:54.146 "strip_size_kb": 64, 00:09:54.146 "state": "offline", 00:09:54.146 "raid_level": "raid0", 00:09:54.146 "superblock": false, 00:09:54.146 "num_base_bdevs": 4, 00:09:54.146 "num_base_bdevs_discovered": 3, 00:09:54.146 "num_base_bdevs_operational": 3, 00:09:54.146 "base_bdevs_list": [ 00:09:54.146 { 00:09:54.146 "name": null, 00:09:54.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.146 "is_configured": false, 00:09:54.146 "data_offset": 0, 00:09:54.146 "data_size": 65536 00:09:54.146 }, 00:09:54.146 { 00:09:54.146 "name": "BaseBdev2", 00:09:54.146 "uuid": "3f1e8327-1aa8-408c-a05f-4a15a750b460", 00:09:54.146 "is_configured": 
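The `expected_state=offline` decision above comes from `has_redundancy raid0` returning 1: raid0 has no redundancy, so deleting BaseBdev1 must drive the array from online to offline rather than degraded-but-online. A runnable sketch of that branch (the case arms are an assumption mirroring the trace, not a copy of bdev_raid.sh):

```shell
# Sketch of the has_redundancy decision the trace exercises: redundant
# levels survive a lost base bdev, raid0 does not. Which levels count as
# redundant here is an assumption based on the trace's raid0 behavior.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;   # redundant: array stays online
        *) return 1 ;;                # raid0, concat: no redundancy
    esac
}

expected_state=''
if has_redundancy raid0; then
    expected_state=online             # degraded but still usable
else
    expected_state=offline            # raid0: one lost base bdev kills the array
fi
echo "$expected_state"
```

This is why the JSON dump that follows shows `"state": "offline"` with the removed slot reported as a null name and an all-zero uuid.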
true, 00:09:54.146 "data_offset": 0, 00:09:54.146 "data_size": 65536 00:09:54.146 }, 00:09:54.146 { 00:09:54.146 "name": "BaseBdev3", 00:09:54.146 "uuid": "09bd6df6-6406-4792-ae26-6ed62b6ed695", 00:09:54.146 "is_configured": true, 00:09:54.146 "data_offset": 0, 00:09:54.146 "data_size": 65536 00:09:54.146 }, 00:09:54.146 { 00:09:54.146 "name": "BaseBdev4", 00:09:54.146 "uuid": "1dda0289-8fa3-4c41-9602-1592ff307d49", 00:09:54.146 "is_configured": true, 00:09:54.146 "data_offset": 0, 00:09:54.146 "data_size": 65536 00:09:54.146 } 00:09:54.146 ] 00:09:54.146 }' 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.146 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:54.716 17:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.717 [2024-10-25 17:51:12.995141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.717 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.717 [2024-10-25 17:51:13.139941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.977 17:51:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.977 [2024-10-25 17:51:13.288207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:54.977 [2024-10-25 17:51:13.288261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
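The `(( i++ ))` / `(( i < num_base_bdevs ))` lines above are the teardown loop: starting from i=1 (BaseBdev1 is already gone), each iteration deletes one more base bdev and re-queries `bdev_raid_get_bdevs`; the raid bdev keeps being reported until the final deletion triggers `raid_bdev_cleanup`, after which the query comes back empty. A stubbed, runnable sketch of that control flow:

```shell
# Sketch of the removal loop from the trace; rpc_cmd calls are replaced
# with echo/assignments so the loop is runnable standalone.
num_base_bdevs=4
raid_bdev=Existed_Raid

for (( i = 1; i < num_base_bdevs; i++ )); do
    echo "deleting BaseBdev$((i + 1))"   # stands in for: rpc_cmd bdev_malloc_delete ...
    if (( i < num_base_bdevs - 1 )); then
        raid_bdev=Existed_Raid           # still listed by bdev_raid_get_bdevs
    else
        raid_bdev=''                     # last base bdev gone: raid bdev cleaned up
    fi
done
echo "raid_bdev='$raid_bdev'"
```

After the loop the script's `'[' -n '' ']'` check on the (now empty) raid bdev name is what lets it proceed to rebuilding the base bdevs.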
bdev_raid_get_bdevs all 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.977 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.238 BaseBdev2 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.238 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.238 [ 00:09:55.238 { 00:09:55.238 "name": "BaseBdev2", 00:09:55.238 "aliases": [ 00:09:55.238 "4a42daf2-5552-48a2-b8a3-f4dc543c5f72" 00:09:55.238 ], 00:09:55.238 "product_name": "Malloc disk", 00:09:55.238 "block_size": 512, 00:09:55.238 "num_blocks": 65536, 00:09:55.238 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:55.238 "assigned_rate_limits": { 00:09:55.238 "rw_ios_per_sec": 0, 00:09:55.238 "rw_mbytes_per_sec": 0, 00:09:55.238 "r_mbytes_per_sec": 0, 00:09:55.238 "w_mbytes_per_sec": 0 00:09:55.238 }, 00:09:55.238 "claimed": false, 00:09:55.238 "zoned": false, 00:09:55.238 "supported_io_types": { 00:09:55.238 "read": true, 00:09:55.238 "write": true, 00:09:55.238 "unmap": true, 00:09:55.238 "flush": true, 00:09:55.238 "reset": true, 00:09:55.238 "nvme_admin": false, 00:09:55.238 "nvme_io": false, 00:09:55.238 "nvme_io_md": false, 00:09:55.238 "write_zeroes": true, 00:09:55.238 "zcopy": true, 00:09:55.238 "get_zone_info": false, 00:09:55.238 "zone_management": false, 00:09:55.238 "zone_append": false, 00:09:55.238 "compare": false, 00:09:55.238 "compare_and_write": false, 00:09:55.239 "abort": true, 00:09:55.239 "seek_hole": false, 00:09:55.239 "seek_data": false, 
00:09:55.239 "copy": true, 00:09:55.239 "nvme_iov_md": false 00:09:55.239 }, 00:09:55.239 "memory_domains": [ 00:09:55.239 { 00:09:55.239 "dma_device_id": "system", 00:09:55.239 "dma_device_type": 1 00:09:55.239 }, 00:09:55.239 { 00:09:55.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.239 "dma_device_type": 2 00:09:55.239 } 00:09:55.239 ], 00:09:55.239 "driver_specific": {} 00:09:55.239 } 00:09:55.239 ] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 BaseBdev3 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.239 
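Each `bdev_malloc_create` above is followed by the same `waitforbdev` sequence: default the timeout to 2000 ms, flush pending examine callbacks with `bdev_wait_for_examine`, then fetch the bdev with that timeout. A runnable sketch with `rpc_cmd` stubbed out (the real helper in autotest_common.sh talks to the SPDK target over its RPC socket):

```shell
# Sketch of the waitforbdev pattern visible in the trace. rpc_cmd is a stub
# here so the example runs standalone; only the call sequence is the point.
rpc_cmd() { echo "[stub] rpc_cmd $*"; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    [[ -z $bdev_timeout ]] && bdev_timeout=2000   # default seen in the trace
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev BaseBdev2
```

Passing `-t` to `bdev_get_bdevs` makes the RPC itself wait for the bdev to appear, so no client-side polling loop is needed.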
17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 [ 00:09:55.239 { 00:09:55.239 "name": "BaseBdev3", 00:09:55.239 "aliases": [ 00:09:55.239 "49895c84-bffa-4f61-9d16-0215e74c5fcb" 00:09:55.239 ], 00:09:55.239 "product_name": "Malloc disk", 00:09:55.239 "block_size": 512, 00:09:55.239 "num_blocks": 65536, 00:09:55.239 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:55.239 "assigned_rate_limits": { 00:09:55.239 "rw_ios_per_sec": 0, 00:09:55.239 "rw_mbytes_per_sec": 0, 00:09:55.239 "r_mbytes_per_sec": 0, 00:09:55.239 "w_mbytes_per_sec": 0 00:09:55.239 }, 00:09:55.239 "claimed": false, 00:09:55.239 "zoned": false, 00:09:55.239 "supported_io_types": { 00:09:55.239 "read": true, 00:09:55.239 "write": true, 00:09:55.239 "unmap": true, 00:09:55.239 "flush": true, 00:09:55.239 "reset": true, 00:09:55.239 "nvme_admin": false, 00:09:55.239 "nvme_io": false, 00:09:55.239 "nvme_io_md": false, 00:09:55.239 "write_zeroes": true, 00:09:55.239 "zcopy": true, 00:09:55.239 "get_zone_info": false, 00:09:55.239 "zone_management": false, 00:09:55.239 "zone_append": false, 00:09:55.239 "compare": false, 00:09:55.239 "compare_and_write": false, 00:09:55.239 "abort": true, 00:09:55.239 "seek_hole": false, 00:09:55.239 "seek_data": false, 00:09:55.239 
"copy": true, 00:09:55.239 "nvme_iov_md": false 00:09:55.239 }, 00:09:55.239 "memory_domains": [ 00:09:55.239 { 00:09:55.239 "dma_device_id": "system", 00:09:55.239 "dma_device_type": 1 00:09:55.239 }, 00:09:55.239 { 00:09:55.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.239 "dma_device_type": 2 00:09:55.239 } 00:09:55.239 ], 00:09:55.239 "driver_specific": {} 00:09:55.239 } 00:09:55.239 ] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 BaseBdev4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.239 17:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 [ 00:09:55.239 { 00:09:55.239 "name": "BaseBdev4", 00:09:55.239 "aliases": [ 00:09:55.239 "ed2cd7f2-b98e-458f-b7f4-289398603321" 00:09:55.239 ], 00:09:55.239 "product_name": "Malloc disk", 00:09:55.239 "block_size": 512, 00:09:55.239 "num_blocks": 65536, 00:09:55.239 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:55.239 "assigned_rate_limits": { 00:09:55.239 "rw_ios_per_sec": 0, 00:09:55.239 "rw_mbytes_per_sec": 0, 00:09:55.239 "r_mbytes_per_sec": 0, 00:09:55.239 "w_mbytes_per_sec": 0 00:09:55.239 }, 00:09:55.239 "claimed": false, 00:09:55.239 "zoned": false, 00:09:55.239 "supported_io_types": { 00:09:55.239 "read": true, 00:09:55.239 "write": true, 00:09:55.239 "unmap": true, 00:09:55.239 "flush": true, 00:09:55.239 "reset": true, 00:09:55.239 "nvme_admin": false, 00:09:55.239 "nvme_io": false, 00:09:55.239 "nvme_io_md": false, 00:09:55.239 "write_zeroes": true, 00:09:55.239 "zcopy": true, 00:09:55.239 "get_zone_info": false, 00:09:55.239 "zone_management": false, 00:09:55.239 "zone_append": false, 00:09:55.239 "compare": false, 00:09:55.239 "compare_and_write": false, 00:09:55.239 "abort": true, 00:09:55.239 "seek_hole": false, 00:09:55.239 "seek_data": false, 00:09:55.239 "copy": true, 
00:09:55.239 "nvme_iov_md": false 00:09:55.239 }, 00:09:55.239 "memory_domains": [ 00:09:55.239 { 00:09:55.239 "dma_device_id": "system", 00:09:55.239 "dma_device_type": 1 00:09:55.239 }, 00:09:55.239 { 00:09:55.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.239 "dma_device_type": 2 00:09:55.239 } 00:09:55.239 ], 00:09:55.239 "driver_specific": {} 00:09:55.239 } 00:09:55.239 ] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 [2024-10-25 17:51:13.661550] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.239 [2024-10-25 17:51:13.661591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.239 [2024-10-25 17:51:13.661613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.239 [2024-10-25 17:51:13.663470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.239 [2024-10-25 17:51:13.663530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.239 17:51:13 
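The odd-looking `-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\'''` in the `bdev_raid_create` line above is just bash xtrace's way of printing an argument that contains literal single quotes: the RPC receives one string, `'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'`, quotes included. A minimal reproduction of that quoting:

```shell
# Reproduce the xtrace quoting from the bdev_raid_create line:
#   ''\''  -> empty string + escaped single quote  -> '
#   '...'  -> the literal bdev list
#   '\'''  -> escaped single quote + empty string  -> '
arg=''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\'''
echo "$arg"    # prints: 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'
```

So the four base bdev names travel as a single quoted argument, which is why the subsequent log shows all four being claimed by the one raid bdev.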
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.239 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.240 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.500 "name": "Existed_Raid", 00:09:55.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.500 "strip_size_kb": 64, 00:09:55.500 "state": "configuring", 00:09:55.500 
"raid_level": "raid0", 00:09:55.500 "superblock": false, 00:09:55.500 "num_base_bdevs": 4, 00:09:55.500 "num_base_bdevs_discovered": 3, 00:09:55.500 "num_base_bdevs_operational": 4, 00:09:55.500 "base_bdevs_list": [ 00:09:55.500 { 00:09:55.500 "name": "BaseBdev1", 00:09:55.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.500 "is_configured": false, 00:09:55.500 "data_offset": 0, 00:09:55.500 "data_size": 0 00:09:55.500 }, 00:09:55.500 { 00:09:55.500 "name": "BaseBdev2", 00:09:55.500 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:55.500 "is_configured": true, 00:09:55.500 "data_offset": 0, 00:09:55.500 "data_size": 65536 00:09:55.500 }, 00:09:55.500 { 00:09:55.500 "name": "BaseBdev3", 00:09:55.500 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:55.500 "is_configured": true, 00:09:55.500 "data_offset": 0, 00:09:55.500 "data_size": 65536 00:09:55.500 }, 00:09:55.500 { 00:09:55.500 "name": "BaseBdev4", 00:09:55.500 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:55.500 "is_configured": true, 00:09:55.500 "data_offset": 0, 00:09:55.500 "data_size": 65536 00:09:55.500 } 00:09:55.500 ] 00:09:55.500 }' 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.500 17:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.760 [2024-10-25 17:51:14.156811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.760 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.019 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.019 "name": "Existed_Raid", 00:09:56.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.019 "strip_size_kb": 64, 00:09:56.019 "state": "configuring", 00:09:56.019 "raid_level": "raid0", 00:09:56.019 "superblock": false, 00:09:56.019 
"num_base_bdevs": 4, 00:09:56.019 "num_base_bdevs_discovered": 2, 00:09:56.019 "num_base_bdevs_operational": 4, 00:09:56.019 "base_bdevs_list": [ 00:09:56.019 { 00:09:56.019 "name": "BaseBdev1", 00:09:56.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.019 "is_configured": false, 00:09:56.019 "data_offset": 0, 00:09:56.019 "data_size": 0 00:09:56.019 }, 00:09:56.019 { 00:09:56.019 "name": null, 00:09:56.019 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:56.019 "is_configured": false, 00:09:56.019 "data_offset": 0, 00:09:56.019 "data_size": 65536 00:09:56.019 }, 00:09:56.019 { 00:09:56.019 "name": "BaseBdev3", 00:09:56.019 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:56.019 "is_configured": true, 00:09:56.019 "data_offset": 0, 00:09:56.019 "data_size": 65536 00:09:56.019 }, 00:09:56.019 { 00:09:56.019 "name": "BaseBdev4", 00:09:56.019 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:56.019 "is_configured": true, 00:09:56.019 "data_offset": 0, 00:09:56.019 "data_size": 65536 00:09:56.019 } 00:09:56.019 ] 00:09:56.019 }' 00:09:56.019 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.019 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:56.278 17:51:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.278 [2024-10-25 17:51:14.620437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.278 BaseBdev1 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.278 17:51:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.278 [ 00:09:56.278 { 00:09:56.278 "name": "BaseBdev1", 00:09:56.278 "aliases": [ 00:09:56.278 "d2a1b0e9-4d67-4840-bec4-491173611565" 00:09:56.278 ], 00:09:56.278 "product_name": "Malloc disk", 00:09:56.278 "block_size": 512, 00:09:56.278 "num_blocks": 65536, 00:09:56.278 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:56.278 "assigned_rate_limits": { 00:09:56.278 "rw_ios_per_sec": 0, 00:09:56.278 "rw_mbytes_per_sec": 0, 00:09:56.278 "r_mbytes_per_sec": 0, 00:09:56.278 "w_mbytes_per_sec": 0 00:09:56.278 }, 00:09:56.278 "claimed": true, 00:09:56.278 "claim_type": "exclusive_write", 00:09:56.278 "zoned": false, 00:09:56.278 "supported_io_types": { 00:09:56.278 "read": true, 00:09:56.278 "write": true, 00:09:56.278 "unmap": true, 00:09:56.278 "flush": true, 00:09:56.278 "reset": true, 00:09:56.278 "nvme_admin": false, 00:09:56.278 "nvme_io": false, 00:09:56.278 "nvme_io_md": false, 00:09:56.278 "write_zeroes": true, 00:09:56.279 "zcopy": true, 00:09:56.279 "get_zone_info": false, 00:09:56.279 "zone_management": false, 00:09:56.279 "zone_append": false, 00:09:56.279 "compare": false, 00:09:56.279 "compare_and_write": false, 00:09:56.279 "abort": true, 00:09:56.279 "seek_hole": false, 00:09:56.279 "seek_data": false, 00:09:56.279 "copy": true, 00:09:56.279 "nvme_iov_md": false 00:09:56.279 }, 00:09:56.279 "memory_domains": [ 00:09:56.279 { 00:09:56.279 "dma_device_id": "system", 00:09:56.279 "dma_device_type": 1 00:09:56.279 }, 00:09:56.279 { 00:09:56.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.279 "dma_device_type": 2 00:09:56.279 } 00:09:56.279 ], 00:09:56.279 "driver_specific": {} 00:09:56.279 } 00:09:56.279 ] 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.279 "name": "Existed_Raid", 00:09:56.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.279 "strip_size_kb": 64, 00:09:56.279 "state": "configuring", 00:09:56.279 "raid_level": "raid0", 00:09:56.279 "superblock": false, 
00:09:56.279 "num_base_bdevs": 4, 00:09:56.279 "num_base_bdevs_discovered": 3, 00:09:56.279 "num_base_bdevs_operational": 4, 00:09:56.279 "base_bdevs_list": [ 00:09:56.279 { 00:09:56.279 "name": "BaseBdev1", 00:09:56.279 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:56.279 "is_configured": true, 00:09:56.279 "data_offset": 0, 00:09:56.279 "data_size": 65536 00:09:56.279 }, 00:09:56.279 { 00:09:56.279 "name": null, 00:09:56.279 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:56.279 "is_configured": false, 00:09:56.279 "data_offset": 0, 00:09:56.279 "data_size": 65536 00:09:56.279 }, 00:09:56.279 { 00:09:56.279 "name": "BaseBdev3", 00:09:56.279 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:56.279 "is_configured": true, 00:09:56.279 "data_offset": 0, 00:09:56.279 "data_size": 65536 00:09:56.279 }, 00:09:56.279 { 00:09:56.279 "name": "BaseBdev4", 00:09:56.279 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:56.279 "is_configured": true, 00:09:56.279 "data_offset": 0, 00:09:56.279 "data_size": 65536 00:09:56.279 } 00:09:56.279 ] 00:09:56.279 }' 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.279 17:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:56.857 17:51:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.857 [2024-10-25 17:51:15.159599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.857 "name": "Existed_Raid", 00:09:56.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.857 "strip_size_kb": 64, 00:09:56.857 "state": "configuring", 00:09:56.857 "raid_level": "raid0", 00:09:56.857 "superblock": false, 00:09:56.857 "num_base_bdevs": 4, 00:09:56.857 "num_base_bdevs_discovered": 2, 00:09:56.857 "num_base_bdevs_operational": 4, 00:09:56.857 "base_bdevs_list": [ 00:09:56.857 { 00:09:56.857 "name": "BaseBdev1", 00:09:56.857 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:56.857 "is_configured": true, 00:09:56.857 "data_offset": 0, 00:09:56.857 "data_size": 65536 00:09:56.857 }, 00:09:56.857 { 00:09:56.857 "name": null, 00:09:56.857 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:56.857 "is_configured": false, 00:09:56.857 "data_offset": 0, 00:09:56.857 "data_size": 65536 00:09:56.857 }, 00:09:56.857 { 00:09:56.857 "name": null, 00:09:56.857 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:56.857 "is_configured": false, 00:09:56.857 "data_offset": 0, 00:09:56.857 "data_size": 65536 00:09:56.857 }, 00:09:56.857 { 00:09:56.857 "name": "BaseBdev4", 00:09:56.857 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:56.857 "is_configured": true, 00:09:56.857 "data_offset": 0, 00:09:56.857 "data_size": 65536 00:09:56.857 } 00:09:56.857 ] 00:09:56.857 }' 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.857 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.134 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:57.134 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.134 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.134 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.394 [2024-10-25 17:51:15.590865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.394 "name": "Existed_Raid", 00:09:57.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.394 "strip_size_kb": 64, 00:09:57.394 "state": "configuring", 00:09:57.394 "raid_level": "raid0", 00:09:57.394 "superblock": false, 00:09:57.394 "num_base_bdevs": 4, 00:09:57.394 "num_base_bdevs_discovered": 3, 00:09:57.394 "num_base_bdevs_operational": 4, 00:09:57.394 "base_bdevs_list": [ 00:09:57.394 { 00:09:57.394 "name": "BaseBdev1", 00:09:57.394 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:57.394 "is_configured": true, 00:09:57.394 "data_offset": 0, 00:09:57.394 "data_size": 65536 00:09:57.394 }, 00:09:57.394 { 00:09:57.394 "name": null, 00:09:57.394 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:57.394 "is_configured": false, 00:09:57.394 "data_offset": 0, 00:09:57.394 "data_size": 65536 00:09:57.394 }, 00:09:57.394 { 00:09:57.394 "name": "BaseBdev3", 00:09:57.394 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:57.394 "is_configured": 
true, 00:09:57.394 "data_offset": 0, 00:09:57.394 "data_size": 65536 00:09:57.394 }, 00:09:57.394 { 00:09:57.394 "name": "BaseBdev4", 00:09:57.394 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:57.394 "is_configured": true, 00:09:57.394 "data_offset": 0, 00:09:57.394 "data_size": 65536 00:09:57.394 } 00:09:57.394 ] 00:09:57.394 }' 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.394 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.655 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.655 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.655 17:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.655 17:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.655 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.655 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:57.655 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.655 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.655 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.655 [2024-10-25 17:51:16.046158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.915 "name": "Existed_Raid", 00:09:57.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.915 "strip_size_kb": 64, 00:09:57.915 "state": "configuring", 00:09:57.915 "raid_level": "raid0", 00:09:57.915 "superblock": false, 00:09:57.915 "num_base_bdevs": 4, 00:09:57.915 "num_base_bdevs_discovered": 2, 00:09:57.915 "num_base_bdevs_operational": 4, 00:09:57.915 
"base_bdevs_list": [ 00:09:57.915 { 00:09:57.915 "name": null, 00:09:57.915 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:57.915 "is_configured": false, 00:09:57.915 "data_offset": 0, 00:09:57.915 "data_size": 65536 00:09:57.915 }, 00:09:57.915 { 00:09:57.915 "name": null, 00:09:57.915 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:57.915 "is_configured": false, 00:09:57.915 "data_offset": 0, 00:09:57.915 "data_size": 65536 00:09:57.915 }, 00:09:57.915 { 00:09:57.915 "name": "BaseBdev3", 00:09:57.915 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:57.915 "is_configured": true, 00:09:57.915 "data_offset": 0, 00:09:57.915 "data_size": 65536 00:09:57.915 }, 00:09:57.915 { 00:09:57.915 "name": "BaseBdev4", 00:09:57.915 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:57.915 "is_configured": true, 00:09:57.915 "data_offset": 0, 00:09:57.915 "data_size": 65536 00:09:57.915 } 00:09:57.915 ] 00:09:57.915 }' 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.915 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:58.175 17:51:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 [2024-10-25 17:51:16.605636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.435 "name": "Existed_Raid", 00:09:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.435 "strip_size_kb": 64, 00:09:58.435 "state": "configuring", 00:09:58.435 "raid_level": "raid0", 00:09:58.435 "superblock": false, 00:09:58.435 "num_base_bdevs": 4, 00:09:58.435 "num_base_bdevs_discovered": 3, 00:09:58.435 "num_base_bdevs_operational": 4, 00:09:58.435 "base_bdevs_list": [ 00:09:58.435 { 00:09:58.435 "name": null, 00:09:58.435 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:58.435 "is_configured": false, 00:09:58.435 "data_offset": 0, 00:09:58.435 "data_size": 65536 00:09:58.435 }, 00:09:58.435 { 00:09:58.435 "name": "BaseBdev2", 00:09:58.435 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:58.435 "is_configured": true, 00:09:58.435 "data_offset": 0, 00:09:58.435 "data_size": 65536 00:09:58.435 }, 00:09:58.435 { 00:09:58.435 "name": "BaseBdev3", 00:09:58.435 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:58.435 "is_configured": true, 00:09:58.435 "data_offset": 0, 00:09:58.435 "data_size": 65536 00:09:58.435 }, 00:09:58.435 { 00:09:58.435 "name": "BaseBdev4", 00:09:58.435 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:58.435 "is_configured": true, 00:09:58.435 "data_offset": 0, 00:09:58.435 "data_size": 65536 00:09:58.435 } 00:09:58.435 ] 00:09:58.435 }' 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.435 17:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.696 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d2a1b0e9-4d67-4840-bec4-491173611565 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.956 [2024-10-25 17:51:17.185234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:58.956 [2024-10-25 17:51:17.185287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:58.956 [2024-10-25 17:51:17.185294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:58.956 [2024-10-25 17:51:17.185554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:58.956 [2024-10-25 17:51:17.185720] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.956 [2024-10-25 17:51:17.185740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:58.956 [2024-10-25 17:51:17.185995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.956 NewBaseBdev 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.956 [ 00:09:58.956 { 
00:09:58.956 "name": "NewBaseBdev", 00:09:58.956 "aliases": [ 00:09:58.956 "d2a1b0e9-4d67-4840-bec4-491173611565" 00:09:58.956 ], 00:09:58.956 "product_name": "Malloc disk", 00:09:58.956 "block_size": 512, 00:09:58.956 "num_blocks": 65536, 00:09:58.956 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:58.956 "assigned_rate_limits": { 00:09:58.956 "rw_ios_per_sec": 0, 00:09:58.956 "rw_mbytes_per_sec": 0, 00:09:58.956 "r_mbytes_per_sec": 0, 00:09:58.956 "w_mbytes_per_sec": 0 00:09:58.956 }, 00:09:58.956 "claimed": true, 00:09:58.956 "claim_type": "exclusive_write", 00:09:58.956 "zoned": false, 00:09:58.956 "supported_io_types": { 00:09:58.956 "read": true, 00:09:58.956 "write": true, 00:09:58.956 "unmap": true, 00:09:58.956 "flush": true, 00:09:58.956 "reset": true, 00:09:58.956 "nvme_admin": false, 00:09:58.956 "nvme_io": false, 00:09:58.956 "nvme_io_md": false, 00:09:58.956 "write_zeroes": true, 00:09:58.956 "zcopy": true, 00:09:58.956 "get_zone_info": false, 00:09:58.956 "zone_management": false, 00:09:58.956 "zone_append": false, 00:09:58.956 "compare": false, 00:09:58.956 "compare_and_write": false, 00:09:58.956 "abort": true, 00:09:58.956 "seek_hole": false, 00:09:58.956 "seek_data": false, 00:09:58.956 "copy": true, 00:09:58.956 "nvme_iov_md": false 00:09:58.956 }, 00:09:58.956 "memory_domains": [ 00:09:58.956 { 00:09:58.956 "dma_device_id": "system", 00:09:58.956 "dma_device_type": 1 00:09:58.956 }, 00:09:58.956 { 00:09:58.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.956 "dma_device_type": 2 00:09:58.956 } 00:09:58.956 ], 00:09:58.956 "driver_specific": {} 00:09:58.956 } 00:09:58.956 ] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:58.956 
17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.956 "name": "Existed_Raid", 00:09:58.956 "uuid": "3e1de971-cf31-4202-a17c-67173455ae24", 00:09:58.956 "strip_size_kb": 64, 00:09:58.956 "state": "online", 00:09:58.956 "raid_level": "raid0", 00:09:58.956 "superblock": false, 00:09:58.956 "num_base_bdevs": 4, 00:09:58.956 "num_base_bdevs_discovered": 4, 00:09:58.956 
"num_base_bdevs_operational": 4, 00:09:58.956 "base_bdevs_list": [ 00:09:58.956 { 00:09:58.956 "name": "NewBaseBdev", 00:09:58.956 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:58.956 "is_configured": true, 00:09:58.956 "data_offset": 0, 00:09:58.956 "data_size": 65536 00:09:58.956 }, 00:09:58.956 { 00:09:58.956 "name": "BaseBdev2", 00:09:58.956 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:58.956 "is_configured": true, 00:09:58.956 "data_offset": 0, 00:09:58.956 "data_size": 65536 00:09:58.956 }, 00:09:58.956 { 00:09:58.956 "name": "BaseBdev3", 00:09:58.956 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:58.956 "is_configured": true, 00:09:58.956 "data_offset": 0, 00:09:58.956 "data_size": 65536 00:09:58.956 }, 00:09:58.956 { 00:09:58.956 "name": "BaseBdev4", 00:09:58.956 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:58.956 "is_configured": true, 00:09:58.956 "data_offset": 0, 00:09:58.956 "data_size": 65536 00:09:58.956 } 00:09:58.956 ] 00:09:58.956 }' 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.956 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.217 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.477 [2024-10-25 17:51:17.660832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.477 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.477 "name": "Existed_Raid", 00:09:59.477 "aliases": [ 00:09:59.477 "3e1de971-cf31-4202-a17c-67173455ae24" 00:09:59.477 ], 00:09:59.477 "product_name": "Raid Volume", 00:09:59.477 "block_size": 512, 00:09:59.477 "num_blocks": 262144, 00:09:59.477 "uuid": "3e1de971-cf31-4202-a17c-67173455ae24", 00:09:59.477 "assigned_rate_limits": { 00:09:59.477 "rw_ios_per_sec": 0, 00:09:59.477 "rw_mbytes_per_sec": 0, 00:09:59.477 "r_mbytes_per_sec": 0, 00:09:59.477 "w_mbytes_per_sec": 0 00:09:59.477 }, 00:09:59.477 "claimed": false, 00:09:59.477 "zoned": false, 00:09:59.477 "supported_io_types": { 00:09:59.477 "read": true, 00:09:59.477 "write": true, 00:09:59.477 "unmap": true, 00:09:59.477 "flush": true, 00:09:59.477 "reset": true, 00:09:59.477 "nvme_admin": false, 00:09:59.477 "nvme_io": false, 00:09:59.477 "nvme_io_md": false, 00:09:59.477 "write_zeroes": true, 00:09:59.477 "zcopy": false, 00:09:59.477 "get_zone_info": false, 00:09:59.477 "zone_management": false, 00:09:59.477 "zone_append": false, 00:09:59.477 "compare": false, 00:09:59.477 "compare_and_write": false, 00:09:59.477 "abort": false, 00:09:59.477 "seek_hole": false, 00:09:59.477 "seek_data": false, 00:09:59.477 "copy": false, 00:09:59.478 "nvme_iov_md": false 00:09:59.478 }, 00:09:59.478 "memory_domains": [ 00:09:59.478 { 00:09:59.478 "dma_device_id": "system", 
00:09:59.478 "dma_device_type": 1 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.478 "dma_device_type": 2 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "system", 00:09:59.478 "dma_device_type": 1 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.478 "dma_device_type": 2 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "system", 00:09:59.478 "dma_device_type": 1 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.478 "dma_device_type": 2 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "system", 00:09:59.478 "dma_device_type": 1 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.478 "dma_device_type": 2 00:09:59.478 } 00:09:59.478 ], 00:09:59.478 "driver_specific": { 00:09:59.478 "raid": { 00:09:59.478 "uuid": "3e1de971-cf31-4202-a17c-67173455ae24", 00:09:59.478 "strip_size_kb": 64, 00:09:59.478 "state": "online", 00:09:59.478 "raid_level": "raid0", 00:09:59.478 "superblock": false, 00:09:59.478 "num_base_bdevs": 4, 00:09:59.478 "num_base_bdevs_discovered": 4, 00:09:59.478 "num_base_bdevs_operational": 4, 00:09:59.478 "base_bdevs_list": [ 00:09:59.478 { 00:09:59.478 "name": "NewBaseBdev", 00:09:59.478 "uuid": "d2a1b0e9-4d67-4840-bec4-491173611565", 00:09:59.478 "is_configured": true, 00:09:59.478 "data_offset": 0, 00:09:59.478 "data_size": 65536 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "name": "BaseBdev2", 00:09:59.478 "uuid": "4a42daf2-5552-48a2-b8a3-f4dc543c5f72", 00:09:59.478 "is_configured": true, 00:09:59.478 "data_offset": 0, 00:09:59.478 "data_size": 65536 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "name": "BaseBdev3", 00:09:59.478 "uuid": "49895c84-bffa-4f61-9d16-0215e74c5fcb", 00:09:59.478 "is_configured": true, 00:09:59.478 "data_offset": 0, 00:09:59.478 "data_size": 65536 00:09:59.478 }, 00:09:59.478 { 00:09:59.478 "name": "BaseBdev4", 
00:09:59.478 "uuid": "ed2cd7f2-b98e-458f-b7f4-289398603321", 00:09:59.478 "is_configured": true, 00:09:59.478 "data_offset": 0, 00:09:59.478 "data_size": 65536 00:09:59.478 } 00:09:59.478 ] 00:09:59.478 } 00:09:59.478 } 00:09:59.478 }' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:59.478 BaseBdev2 00:09:59.478 BaseBdev3 00:09:59.478 BaseBdev4' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.478 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.738 [2024-10-25 17:51:17.967963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.738 [2024-10-25 17:51:17.967996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.738 [2024-10-25 17:51:17.968074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.738 [2024-10-25 17:51:17.968141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.738 [2024-10-25 17:51:17.968155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69120 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 69120 
']' 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69120 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.738 17:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69120 00:09:59.738 killing process with pid 69120 00:09:59.738 17:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.738 17:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.738 17:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69120' 00:09:59.738 17:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69120 00:09:59.738 [2024-10-25 17:51:18.016902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.738 17:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69120 00:09:59.998 [2024-10-25 17:51:18.399050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.381 00:10:01.381 real 0m11.106s 00:10:01.381 user 0m17.594s 00:10:01.381 sys 0m2.078s 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.381 ************************************ 00:10:01.381 END TEST raid_state_function_test 00:10:01.381 ************************************ 00:10:01.381 17:51:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:01.381 
17:51:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.381 17:51:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.381 17:51:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.381 ************************************ 00:10:01.381 START TEST raid_state_function_test_sb 00:10:01.381 ************************************ 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69787 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69787' 00:10:01.381 Process raid pid: 69787 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69787 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 69787 ']' 00:10:01.381 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.382 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.382 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.382 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.382 17:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.382 [2024-10-25 17:51:19.647802] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:01.382 [2024-10-25 17:51:19.647941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.641 [2024-10-25 17:51:19.829650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.641 [2024-10-25 17:51:19.942743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.901 [2024-10-25 17:51:20.144414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.901 [2024-10-25 17:51:20.144450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 [2024-10-25 17:51:20.471407] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.161 [2024-10-25 17:51:20.471460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.161 [2024-10-25 17:51:20.471470] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.161 [2024-10-25 17:51:20.471478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.161 [2024-10-25 17:51:20.471484] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:02.161 [2024-10-25 17:51:20.471493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.161 [2024-10-25 17:51:20.471499] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.161 [2024-10-25 17:51:20.471507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.161 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.162 17:51:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.162 "name": "Existed_Raid", 00:10:02.162 "uuid": "c615d85f-8cae-4af9-baa3-d24f45386b44", 00:10:02.162 "strip_size_kb": 64, 00:10:02.162 "state": "configuring", 00:10:02.162 "raid_level": "raid0", 00:10:02.162 "superblock": true, 00:10:02.162 "num_base_bdevs": 4, 00:10:02.162 "num_base_bdevs_discovered": 0, 00:10:02.162 "num_base_bdevs_operational": 4, 00:10:02.162 "base_bdevs_list": [ 00:10:02.162 { 00:10:02.162 "name": "BaseBdev1", 00:10:02.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.162 "is_configured": false, 00:10:02.162 "data_offset": 0, 00:10:02.162 "data_size": 0 00:10:02.162 }, 00:10:02.162 { 00:10:02.162 "name": "BaseBdev2", 00:10:02.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.162 "is_configured": false, 00:10:02.162 "data_offset": 0, 00:10:02.162 "data_size": 0 00:10:02.162 }, 00:10:02.162 { 00:10:02.162 "name": "BaseBdev3", 00:10:02.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.162 "is_configured": false, 00:10:02.162 "data_offset": 0, 00:10:02.162 "data_size": 0 00:10:02.162 }, 00:10:02.162 { 00:10:02.162 "name": "BaseBdev4", 00:10:02.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.162 "is_configured": false, 00:10:02.162 "data_offset": 0, 00:10:02.162 "data_size": 0 00:10:02.162 } 00:10:02.162 ] 00:10:02.162 }' 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.162 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.731 [2024-10-25 17:51:20.942583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.731 [2024-10-25 17:51:20.942625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.731 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 [2024-10-25 17:51:20.954562] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.732 [2024-10-25 17:51:20.954619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.732 [2024-10-25 17:51:20.954628] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.732 [2024-10-25 17:51:20.954638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.732 [2024-10-25 17:51:20.954644] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.732 [2024-10-25 17:51:20.954653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.732 [2024-10-25 17:51:20.954659] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:02.732 [2024-10-25 17:51:20.954668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.732 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 17:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.732 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 17:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 [2024-10-25 17:51:21.003424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.732 BaseBdev1 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 [ 00:10:02.732 { 00:10:02.732 "name": "BaseBdev1", 00:10:02.732 "aliases": [ 00:10:02.732 "8091e807-4e96-4cb6-9c4a-c0386da952de" 00:10:02.732 ], 00:10:02.732 "product_name": "Malloc disk", 00:10:02.732 "block_size": 512, 00:10:02.732 "num_blocks": 65536, 00:10:02.732 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:02.732 "assigned_rate_limits": { 00:10:02.732 "rw_ios_per_sec": 0, 00:10:02.732 "rw_mbytes_per_sec": 0, 00:10:02.732 "r_mbytes_per_sec": 0, 00:10:02.732 "w_mbytes_per_sec": 0 00:10:02.732 }, 00:10:02.732 "claimed": true, 00:10:02.732 "claim_type": "exclusive_write", 00:10:02.732 "zoned": false, 00:10:02.732 "supported_io_types": { 00:10:02.732 "read": true, 00:10:02.732 "write": true, 00:10:02.732 "unmap": true, 00:10:02.732 "flush": true, 00:10:02.732 "reset": true, 00:10:02.732 "nvme_admin": false, 00:10:02.732 "nvme_io": false, 00:10:02.732 "nvme_io_md": false, 00:10:02.732 "write_zeroes": true, 00:10:02.732 "zcopy": true, 00:10:02.732 "get_zone_info": false, 00:10:02.732 "zone_management": false, 00:10:02.732 "zone_append": false, 00:10:02.732 "compare": false, 00:10:02.732 "compare_and_write": false, 00:10:02.732 "abort": true, 00:10:02.732 "seek_hole": false, 00:10:02.732 "seek_data": false, 00:10:02.732 "copy": true, 00:10:02.732 "nvme_iov_md": false 00:10:02.732 }, 00:10:02.732 "memory_domains": [ 00:10:02.732 { 00:10:02.732 "dma_device_id": "system", 00:10:02.732 "dma_device_type": 1 00:10:02.732 }, 00:10:02.732 { 00:10:02.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.732 "dma_device_type": 2 00:10:02.732 } 00:10:02.732 ], 00:10:02.732 "driver_specific": {} 
00:10:02.732 } 00:10:02.732 ] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.732 "name": "Existed_Raid", 00:10:02.732 "uuid": "c9fa4a32-d7f8-4cef-a42d-19f1747e200e", 00:10:02.732 "strip_size_kb": 64, 00:10:02.732 "state": "configuring", 00:10:02.732 "raid_level": "raid0", 00:10:02.732 "superblock": true, 00:10:02.732 "num_base_bdevs": 4, 00:10:02.732 "num_base_bdevs_discovered": 1, 00:10:02.732 "num_base_bdevs_operational": 4, 00:10:02.732 "base_bdevs_list": [ 00:10:02.732 { 00:10:02.732 "name": "BaseBdev1", 00:10:02.732 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:02.732 "is_configured": true, 00:10:02.732 "data_offset": 2048, 00:10:02.732 "data_size": 63488 00:10:02.732 }, 00:10:02.732 { 00:10:02.732 "name": "BaseBdev2", 00:10:02.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.732 "is_configured": false, 00:10:02.732 "data_offset": 0, 00:10:02.732 "data_size": 0 00:10:02.732 }, 00:10:02.732 { 00:10:02.732 "name": "BaseBdev3", 00:10:02.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.732 "is_configured": false, 00:10:02.732 "data_offset": 0, 00:10:02.732 "data_size": 0 00:10:02.732 }, 00:10:02.732 { 00:10:02.732 "name": "BaseBdev4", 00:10:02.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.732 "is_configured": false, 00:10:02.732 "data_offset": 0, 00:10:02.732 "data_size": 0 00:10:02.732 } 00:10:02.732 ] 00:10:02.732 }' 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.732 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.303 [2024-10-25 17:51:21.486640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.303 [2024-10-25 17:51:21.486701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 [2024-10-25 17:51:21.498676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.303 [2024-10-25 17:51:21.500437] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.303 [2024-10-25 17:51:21.500480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.303 [2024-10-25 17:51:21.500489] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.303 [2024-10-25 17:51:21.500499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.303 [2024-10-25 17:51:21.500507] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.303 [2024-10-25 17:51:21.500515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.303 17:51:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.303 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.303 "name": 
"Existed_Raid", 00:10:03.303 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:03.303 "strip_size_kb": 64, 00:10:03.303 "state": "configuring", 00:10:03.303 "raid_level": "raid0", 00:10:03.303 "superblock": true, 00:10:03.303 "num_base_bdevs": 4, 00:10:03.303 "num_base_bdevs_discovered": 1, 00:10:03.303 "num_base_bdevs_operational": 4, 00:10:03.303 "base_bdevs_list": [ 00:10:03.303 { 00:10:03.303 "name": "BaseBdev1", 00:10:03.303 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:03.303 "is_configured": true, 00:10:03.303 "data_offset": 2048, 00:10:03.303 "data_size": 63488 00:10:03.303 }, 00:10:03.303 { 00:10:03.303 "name": "BaseBdev2", 00:10:03.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.303 "is_configured": false, 00:10:03.303 "data_offset": 0, 00:10:03.303 "data_size": 0 00:10:03.303 }, 00:10:03.303 { 00:10:03.303 "name": "BaseBdev3", 00:10:03.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.303 "is_configured": false, 00:10:03.303 "data_offset": 0, 00:10:03.303 "data_size": 0 00:10:03.303 }, 00:10:03.303 { 00:10:03.303 "name": "BaseBdev4", 00:10:03.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.303 "is_configured": false, 00:10:03.304 "data_offset": 0, 00:10:03.304 "data_size": 0 00:10:03.304 } 00:10:03.304 ] 00:10:03.304 }' 00:10:03.304 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.304 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 [2024-10-25 17:51:21.926956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:03.564 BaseBdev2 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 [ 00:10:03.564 { 00:10:03.564 "name": "BaseBdev2", 00:10:03.564 "aliases": [ 00:10:03.564 "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8" 00:10:03.564 ], 00:10:03.564 "product_name": "Malloc disk", 00:10:03.564 "block_size": 512, 00:10:03.564 "num_blocks": 65536, 00:10:03.564 "uuid": "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:03.564 
"assigned_rate_limits": { 00:10:03.564 "rw_ios_per_sec": 0, 00:10:03.564 "rw_mbytes_per_sec": 0, 00:10:03.564 "r_mbytes_per_sec": 0, 00:10:03.564 "w_mbytes_per_sec": 0 00:10:03.564 }, 00:10:03.564 "claimed": true, 00:10:03.564 "claim_type": "exclusive_write", 00:10:03.564 "zoned": false, 00:10:03.564 "supported_io_types": { 00:10:03.564 "read": true, 00:10:03.564 "write": true, 00:10:03.564 "unmap": true, 00:10:03.564 "flush": true, 00:10:03.564 "reset": true, 00:10:03.564 "nvme_admin": false, 00:10:03.564 "nvme_io": false, 00:10:03.564 "nvme_io_md": false, 00:10:03.564 "write_zeroes": true, 00:10:03.564 "zcopy": true, 00:10:03.564 "get_zone_info": false, 00:10:03.564 "zone_management": false, 00:10:03.564 "zone_append": false, 00:10:03.564 "compare": false, 00:10:03.564 "compare_and_write": false, 00:10:03.564 "abort": true, 00:10:03.564 "seek_hole": false, 00:10:03.564 "seek_data": false, 00:10:03.564 "copy": true, 00:10:03.564 "nvme_iov_md": false 00:10:03.564 }, 00:10:03.564 "memory_domains": [ 00:10:03.564 { 00:10:03.564 "dma_device_id": "system", 00:10:03.564 "dma_device_type": 1 00:10:03.564 }, 00:10:03.564 { 00:10:03.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.564 "dma_device_type": 2 00:10:03.564 } 00:10:03.564 ], 00:10:03.564 "driver_specific": {} 00:10:03.564 } 00:10:03.564 ] 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 17:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.825 17:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.825 "name": "Existed_Raid", 00:10:03.825 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:03.825 "strip_size_kb": 64, 00:10:03.825 "state": "configuring", 00:10:03.825 "raid_level": "raid0", 00:10:03.825 "superblock": true, 00:10:03.825 "num_base_bdevs": 4, 00:10:03.825 "num_base_bdevs_discovered": 2, 00:10:03.825 "num_base_bdevs_operational": 4, 
00:10:03.825 "base_bdevs_list": [ 00:10:03.825 { 00:10:03.825 "name": "BaseBdev1", 00:10:03.825 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:03.825 "is_configured": true, 00:10:03.825 "data_offset": 2048, 00:10:03.825 "data_size": 63488 00:10:03.825 }, 00:10:03.825 { 00:10:03.825 "name": "BaseBdev2", 00:10:03.825 "uuid": "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:03.825 "is_configured": true, 00:10:03.825 "data_offset": 2048, 00:10:03.825 "data_size": 63488 00:10:03.825 }, 00:10:03.825 { 00:10:03.825 "name": "BaseBdev3", 00:10:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.825 "is_configured": false, 00:10:03.825 "data_offset": 0, 00:10:03.825 "data_size": 0 00:10:03.825 }, 00:10:03.825 { 00:10:03.825 "name": "BaseBdev4", 00:10:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.825 "is_configured": false, 00:10:03.825 "data_offset": 0, 00:10:03.825 "data_size": 0 00:10:03.825 } 00:10:03.825 ] 00:10:03.825 }' 00:10:03.825 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.825 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.085 [2024-10-25 17:51:22.460497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.085 BaseBdev3 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.085 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.086 [ 00:10:04.086 { 00:10:04.086 "name": "BaseBdev3", 00:10:04.086 "aliases": [ 00:10:04.086 "97c21bbc-013e-462a-98e7-52aa0ef97400" 00:10:04.086 ], 00:10:04.086 "product_name": "Malloc disk", 00:10:04.086 "block_size": 512, 00:10:04.086 "num_blocks": 65536, 00:10:04.086 "uuid": "97c21bbc-013e-462a-98e7-52aa0ef97400", 00:10:04.086 "assigned_rate_limits": { 00:10:04.086 "rw_ios_per_sec": 0, 00:10:04.086 "rw_mbytes_per_sec": 0, 00:10:04.086 "r_mbytes_per_sec": 0, 00:10:04.086 "w_mbytes_per_sec": 0 00:10:04.086 }, 00:10:04.086 "claimed": true, 00:10:04.086 "claim_type": "exclusive_write", 00:10:04.086 "zoned": false, 00:10:04.086 "supported_io_types": { 00:10:04.086 "read": true, 00:10:04.086 
"write": true, 00:10:04.086 "unmap": true, 00:10:04.086 "flush": true, 00:10:04.086 "reset": true, 00:10:04.086 "nvme_admin": false, 00:10:04.086 "nvme_io": false, 00:10:04.086 "nvme_io_md": false, 00:10:04.086 "write_zeroes": true, 00:10:04.086 "zcopy": true, 00:10:04.086 "get_zone_info": false, 00:10:04.086 "zone_management": false, 00:10:04.086 "zone_append": false, 00:10:04.086 "compare": false, 00:10:04.086 "compare_and_write": false, 00:10:04.086 "abort": true, 00:10:04.086 "seek_hole": false, 00:10:04.086 "seek_data": false, 00:10:04.086 "copy": true, 00:10:04.086 "nvme_iov_md": false 00:10:04.086 }, 00:10:04.086 "memory_domains": [ 00:10:04.086 { 00:10:04.086 "dma_device_id": "system", 00:10:04.086 "dma_device_type": 1 00:10:04.086 }, 00:10:04.086 { 00:10:04.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.086 "dma_device_type": 2 00:10:04.086 } 00:10:04.086 ], 00:10:04.086 "driver_specific": {} 00:10:04.086 } 00:10:04.086 ] 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.086 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.346 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.346 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.346 "name": "Existed_Raid", 00:10:04.346 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:04.346 "strip_size_kb": 64, 00:10:04.346 "state": "configuring", 00:10:04.346 "raid_level": "raid0", 00:10:04.346 "superblock": true, 00:10:04.346 "num_base_bdevs": 4, 00:10:04.346 "num_base_bdevs_discovered": 3, 00:10:04.346 "num_base_bdevs_operational": 4, 00:10:04.346 "base_bdevs_list": [ 00:10:04.346 { 00:10:04.346 "name": "BaseBdev1", 00:10:04.346 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:04.346 "is_configured": true, 00:10:04.346 "data_offset": 2048, 00:10:04.346 "data_size": 63488 00:10:04.346 }, 00:10:04.346 { 00:10:04.346 "name": "BaseBdev2", 00:10:04.346 "uuid": 
"bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:04.346 "is_configured": true, 00:10:04.346 "data_offset": 2048, 00:10:04.346 "data_size": 63488 00:10:04.346 }, 00:10:04.346 { 00:10:04.346 "name": "BaseBdev3", 00:10:04.346 "uuid": "97c21bbc-013e-462a-98e7-52aa0ef97400", 00:10:04.346 "is_configured": true, 00:10:04.346 "data_offset": 2048, 00:10:04.346 "data_size": 63488 00:10:04.346 }, 00:10:04.346 { 00:10:04.346 "name": "BaseBdev4", 00:10:04.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.346 "is_configured": false, 00:10:04.346 "data_offset": 0, 00:10:04.346 "data_size": 0 00:10:04.346 } 00:10:04.346 ] 00:10:04.346 }' 00:10:04.346 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.346 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.607 [2024-10-25 17:51:22.933116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.607 [2024-10-25 17:51:22.933371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.607 [2024-10-25 17:51:22.933386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.607 [2024-10-25 17:51:22.933641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.607 [2024-10-25 17:51:22.933816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.607 [2024-10-25 17:51:22.933849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:04.607 BaseBdev4 00:10:04.607 [2024-10-25 17:51:22.933992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.607 [ 00:10:04.607 { 00:10:04.607 "name": "BaseBdev4", 00:10:04.607 "aliases": [ 00:10:04.607 "92432d8d-26f4-42b7-8236-5f9adc0acfdb" 00:10:04.607 ], 00:10:04.607 "product_name": "Malloc disk", 00:10:04.607 "block_size": 512, 00:10:04.607 
"num_blocks": 65536, 00:10:04.607 "uuid": "92432d8d-26f4-42b7-8236-5f9adc0acfdb", 00:10:04.607 "assigned_rate_limits": { 00:10:04.607 "rw_ios_per_sec": 0, 00:10:04.607 "rw_mbytes_per_sec": 0, 00:10:04.607 "r_mbytes_per_sec": 0, 00:10:04.607 "w_mbytes_per_sec": 0 00:10:04.607 }, 00:10:04.607 "claimed": true, 00:10:04.607 "claim_type": "exclusive_write", 00:10:04.607 "zoned": false, 00:10:04.607 "supported_io_types": { 00:10:04.607 "read": true, 00:10:04.607 "write": true, 00:10:04.607 "unmap": true, 00:10:04.607 "flush": true, 00:10:04.607 "reset": true, 00:10:04.607 "nvme_admin": false, 00:10:04.607 "nvme_io": false, 00:10:04.607 "nvme_io_md": false, 00:10:04.607 "write_zeroes": true, 00:10:04.607 "zcopy": true, 00:10:04.607 "get_zone_info": false, 00:10:04.607 "zone_management": false, 00:10:04.607 "zone_append": false, 00:10:04.607 "compare": false, 00:10:04.607 "compare_and_write": false, 00:10:04.607 "abort": true, 00:10:04.607 "seek_hole": false, 00:10:04.607 "seek_data": false, 00:10:04.607 "copy": true, 00:10:04.607 "nvme_iov_md": false 00:10:04.607 }, 00:10:04.607 "memory_domains": [ 00:10:04.607 { 00:10:04.607 "dma_device_id": "system", 00:10:04.607 "dma_device_type": 1 00:10:04.607 }, 00:10:04.607 { 00:10:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.607 "dma_device_type": 2 00:10:04.607 } 00:10:04.607 ], 00:10:04.607 "driver_specific": {} 00:10:04.607 } 00:10:04.607 ] 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.607 17:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.607 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.607 "name": "Existed_Raid", 00:10:04.607 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:04.607 "strip_size_kb": 64, 00:10:04.607 "state": "online", 00:10:04.607 "raid_level": "raid0", 00:10:04.607 "superblock": true, 00:10:04.607 "num_base_bdevs": 4, 
00:10:04.607 "num_base_bdevs_discovered": 4, 00:10:04.607 "num_base_bdevs_operational": 4, 00:10:04.607 "base_bdevs_list": [ 00:10:04.607 { 00:10:04.607 "name": "BaseBdev1", 00:10:04.607 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:04.607 "is_configured": true, 00:10:04.607 "data_offset": 2048, 00:10:04.607 "data_size": 63488 00:10:04.607 }, 00:10:04.607 { 00:10:04.607 "name": "BaseBdev2", 00:10:04.607 "uuid": "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:04.607 "is_configured": true, 00:10:04.607 "data_offset": 2048, 00:10:04.607 "data_size": 63488 00:10:04.607 }, 00:10:04.607 { 00:10:04.607 "name": "BaseBdev3", 00:10:04.607 "uuid": "97c21bbc-013e-462a-98e7-52aa0ef97400", 00:10:04.607 "is_configured": true, 00:10:04.607 "data_offset": 2048, 00:10:04.607 "data_size": 63488 00:10:04.607 }, 00:10:04.607 { 00:10:04.607 "name": "BaseBdev4", 00:10:04.607 "uuid": "92432d8d-26f4-42b7-8236-5f9adc0acfdb", 00:10:04.607 "is_configured": true, 00:10:04.607 "data_offset": 2048, 00:10:04.607 "data_size": 63488 00:10:04.607 } 00:10:04.607 ] 00:10:04.607 }' 00:10:04.607 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.607 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.178 
17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.178 [2024-10-25 17:51:23.372753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.178 "name": "Existed_Raid", 00:10:05.178 "aliases": [ 00:10:05.178 "c139164e-8ad5-4423-93fb-d5b717bd076d" 00:10:05.178 ], 00:10:05.178 "product_name": "Raid Volume", 00:10:05.178 "block_size": 512, 00:10:05.178 "num_blocks": 253952, 00:10:05.178 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:05.178 "assigned_rate_limits": { 00:10:05.178 "rw_ios_per_sec": 0, 00:10:05.178 "rw_mbytes_per_sec": 0, 00:10:05.178 "r_mbytes_per_sec": 0, 00:10:05.178 "w_mbytes_per_sec": 0 00:10:05.178 }, 00:10:05.178 "claimed": false, 00:10:05.178 "zoned": false, 00:10:05.178 "supported_io_types": { 00:10:05.178 "read": true, 00:10:05.178 "write": true, 00:10:05.178 "unmap": true, 00:10:05.178 "flush": true, 00:10:05.178 "reset": true, 00:10:05.178 "nvme_admin": false, 00:10:05.178 "nvme_io": false, 00:10:05.178 "nvme_io_md": false, 00:10:05.178 "write_zeroes": true, 00:10:05.178 "zcopy": false, 00:10:05.178 "get_zone_info": false, 00:10:05.178 "zone_management": false, 00:10:05.178 "zone_append": false, 00:10:05.178 "compare": false, 00:10:05.178 "compare_and_write": false, 00:10:05.178 "abort": false, 00:10:05.178 "seek_hole": false, 00:10:05.178 "seek_data": false, 00:10:05.178 "copy": false, 00:10:05.178 
"nvme_iov_md": false 00:10:05.178 }, 00:10:05.178 "memory_domains": [ 00:10:05.178 { 00:10:05.178 "dma_device_id": "system", 00:10:05.178 "dma_device_type": 1 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.178 "dma_device_type": 2 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "system", 00:10:05.178 "dma_device_type": 1 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.178 "dma_device_type": 2 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "system", 00:10:05.178 "dma_device_type": 1 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.178 "dma_device_type": 2 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "system", 00:10:05.178 "dma_device_type": 1 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.178 "dma_device_type": 2 00:10:05.178 } 00:10:05.178 ], 00:10:05.178 "driver_specific": { 00:10:05.178 "raid": { 00:10:05.178 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:05.178 "strip_size_kb": 64, 00:10:05.178 "state": "online", 00:10:05.178 "raid_level": "raid0", 00:10:05.178 "superblock": true, 00:10:05.178 "num_base_bdevs": 4, 00:10:05.178 "num_base_bdevs_discovered": 4, 00:10:05.178 "num_base_bdevs_operational": 4, 00:10:05.178 "base_bdevs_list": [ 00:10:05.178 { 00:10:05.178 "name": "BaseBdev1", 00:10:05.178 "uuid": "8091e807-4e96-4cb6-9c4a-c0386da952de", 00:10:05.178 "is_configured": true, 00:10:05.178 "data_offset": 2048, 00:10:05.178 "data_size": 63488 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "name": "BaseBdev2", 00:10:05.178 "uuid": "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:05.178 "is_configured": true, 00:10:05.178 "data_offset": 2048, 00:10:05.178 "data_size": 63488 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "name": "BaseBdev3", 00:10:05.178 "uuid": "97c21bbc-013e-462a-98e7-52aa0ef97400", 00:10:05.178 "is_configured": true, 
00:10:05.178 "data_offset": 2048, 00:10:05.178 "data_size": 63488 00:10:05.178 }, 00:10:05.178 { 00:10:05.178 "name": "BaseBdev4", 00:10:05.178 "uuid": "92432d8d-26f4-42b7-8236-5f9adc0acfdb", 00:10:05.178 "is_configured": true, 00:10:05.178 "data_offset": 2048, 00:10:05.178 "data_size": 63488 00:10:05.178 } 00:10:05.178 ] 00:10:05.178 } 00:10:05.178 } 00:10:05.178 }' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.178 BaseBdev2 00:10:05.178 BaseBdev3 00:10:05.178 BaseBdev4' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.178 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.179 17:51:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.179 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.439 [2024-10-25 17:51:23.647991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.439 [2024-10-25 17:51:23.648022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.439 [2024-10-25 17:51:23.648077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.439 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.440 "name": "Existed_Raid", 00:10:05.440 "uuid": "c139164e-8ad5-4423-93fb-d5b717bd076d", 00:10:05.440 "strip_size_kb": 64, 00:10:05.440 "state": "offline", 00:10:05.440 "raid_level": "raid0", 00:10:05.440 "superblock": true, 00:10:05.440 "num_base_bdevs": 4, 00:10:05.440 "num_base_bdevs_discovered": 3, 00:10:05.440 "num_base_bdevs_operational": 3, 00:10:05.440 "base_bdevs_list": [ 00:10:05.440 { 00:10:05.440 "name": null, 00:10:05.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.440 "is_configured": false, 00:10:05.440 "data_offset": 0, 00:10:05.440 "data_size": 63488 00:10:05.440 }, 00:10:05.440 { 00:10:05.440 "name": "BaseBdev2", 00:10:05.440 "uuid": "bb664f3b-5f5d-4c70-a17d-bc9a91e986f8", 00:10:05.440 "is_configured": true, 00:10:05.440 "data_offset": 2048, 00:10:05.440 "data_size": 63488 00:10:05.440 }, 00:10:05.440 { 00:10:05.440 "name": "BaseBdev3", 00:10:05.440 "uuid": "97c21bbc-013e-462a-98e7-52aa0ef97400", 00:10:05.440 "is_configured": true, 00:10:05.440 "data_offset": 2048, 00:10:05.440 "data_size": 63488 00:10:05.440 }, 00:10:05.440 { 00:10:05.440 "name": "BaseBdev4", 00:10:05.440 "uuid": "92432d8d-26f4-42b7-8236-5f9adc0acfdb", 00:10:05.440 "is_configured": true, 00:10:05.440 "data_offset": 2048, 00:10:05.440 "data_size": 63488 00:10:05.440 } 00:10:05.440 ] 00:10:05.440 }' 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.440 17:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.010 
17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.010 [2024-10-25 17:51:24.239345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.010 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.010 [2024-10-25 17:51:24.391863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.271 17:51:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 [2024-10-25 17:51:24.543870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.271 [2024-10-25 17:51:24.543916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.271 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.532 BaseBdev2 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.532 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.532 [ 00:10:06.532 { 00:10:06.532 "name": "BaseBdev2", 00:10:06.532 "aliases": [ 00:10:06.532 
"0738d5a3-cec1-4514-b7e1-559698ad3a7e" 00:10:06.532 ], 00:10:06.532 "product_name": "Malloc disk", 00:10:06.532 "block_size": 512, 00:10:06.532 "num_blocks": 65536, 00:10:06.532 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:06.532 "assigned_rate_limits": { 00:10:06.532 "rw_ios_per_sec": 0, 00:10:06.532 "rw_mbytes_per_sec": 0, 00:10:06.532 "r_mbytes_per_sec": 0, 00:10:06.532 "w_mbytes_per_sec": 0 00:10:06.532 }, 00:10:06.532 "claimed": false, 00:10:06.532 "zoned": false, 00:10:06.532 "supported_io_types": { 00:10:06.532 "read": true, 00:10:06.532 "write": true, 00:10:06.532 "unmap": true, 00:10:06.532 "flush": true, 00:10:06.532 "reset": true, 00:10:06.532 "nvme_admin": false, 00:10:06.532 "nvme_io": false, 00:10:06.532 "nvme_io_md": false, 00:10:06.532 "write_zeroes": true, 00:10:06.532 "zcopy": true, 00:10:06.532 "get_zone_info": false, 00:10:06.532 "zone_management": false, 00:10:06.532 "zone_append": false, 00:10:06.532 "compare": false, 00:10:06.532 "compare_and_write": false, 00:10:06.532 "abort": true, 00:10:06.532 "seek_hole": false, 00:10:06.532 "seek_data": false, 00:10:06.532 "copy": true, 00:10:06.532 "nvme_iov_md": false 00:10:06.532 }, 00:10:06.532 "memory_domains": [ 00:10:06.532 { 00:10:06.532 "dma_device_id": "system", 00:10:06.532 "dma_device_type": 1 00:10:06.532 }, 00:10:06.532 { 00:10:06.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.532 "dma_device_type": 2 00:10:06.532 } 00:10:06.532 ], 00:10:06.532 "driver_specific": {} 00:10:06.532 } 00:10:06.532 ] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.533 17:51:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 BaseBdev3 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 [ 00:10:06.533 { 
00:10:06.533 "name": "BaseBdev3", 00:10:06.533 "aliases": [ 00:10:06.533 "aadd3d33-2017-4d3d-94dd-dc89fc92074e" 00:10:06.533 ], 00:10:06.533 "product_name": "Malloc disk", 00:10:06.533 "block_size": 512, 00:10:06.533 "num_blocks": 65536, 00:10:06.533 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:06.533 "assigned_rate_limits": { 00:10:06.533 "rw_ios_per_sec": 0, 00:10:06.533 "rw_mbytes_per_sec": 0, 00:10:06.533 "r_mbytes_per_sec": 0, 00:10:06.533 "w_mbytes_per_sec": 0 00:10:06.533 }, 00:10:06.533 "claimed": false, 00:10:06.533 "zoned": false, 00:10:06.533 "supported_io_types": { 00:10:06.533 "read": true, 00:10:06.533 "write": true, 00:10:06.533 "unmap": true, 00:10:06.533 "flush": true, 00:10:06.533 "reset": true, 00:10:06.533 "nvme_admin": false, 00:10:06.533 "nvme_io": false, 00:10:06.533 "nvme_io_md": false, 00:10:06.533 "write_zeroes": true, 00:10:06.533 "zcopy": true, 00:10:06.533 "get_zone_info": false, 00:10:06.533 "zone_management": false, 00:10:06.533 "zone_append": false, 00:10:06.533 "compare": false, 00:10:06.533 "compare_and_write": false, 00:10:06.533 "abort": true, 00:10:06.533 "seek_hole": false, 00:10:06.533 "seek_data": false, 00:10:06.533 "copy": true, 00:10:06.533 "nvme_iov_md": false 00:10:06.533 }, 00:10:06.533 "memory_domains": [ 00:10:06.533 { 00:10:06.533 "dma_device_id": "system", 00:10:06.533 "dma_device_type": 1 00:10:06.533 }, 00:10:06.533 { 00:10:06.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.533 "dma_device_type": 2 00:10:06.533 } 00:10:06.533 ], 00:10:06.533 "driver_specific": {} 00:10:06.533 } 00:10:06.533 ] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 BaseBdev4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:06.533 [ 00:10:06.533 { 00:10:06.533 "name": "BaseBdev4", 00:10:06.533 "aliases": [ 00:10:06.533 "1d6fc9f3-2453-439c-8614-cc91c5f47d34" 00:10:06.533 ], 00:10:06.533 "product_name": "Malloc disk", 00:10:06.533 "block_size": 512, 00:10:06.533 "num_blocks": 65536, 00:10:06.533 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:06.533 "assigned_rate_limits": { 00:10:06.533 "rw_ios_per_sec": 0, 00:10:06.533 "rw_mbytes_per_sec": 0, 00:10:06.533 "r_mbytes_per_sec": 0, 00:10:06.533 "w_mbytes_per_sec": 0 00:10:06.533 }, 00:10:06.533 "claimed": false, 00:10:06.533 "zoned": false, 00:10:06.533 "supported_io_types": { 00:10:06.533 "read": true, 00:10:06.533 "write": true, 00:10:06.533 "unmap": true, 00:10:06.533 "flush": true, 00:10:06.533 "reset": true, 00:10:06.533 "nvme_admin": false, 00:10:06.533 "nvme_io": false, 00:10:06.533 "nvme_io_md": false, 00:10:06.533 "write_zeroes": true, 00:10:06.533 "zcopy": true, 00:10:06.533 "get_zone_info": false, 00:10:06.533 "zone_management": false, 00:10:06.533 "zone_append": false, 00:10:06.533 "compare": false, 00:10:06.533 "compare_and_write": false, 00:10:06.533 "abort": true, 00:10:06.533 "seek_hole": false, 00:10:06.533 "seek_data": false, 00:10:06.533 "copy": true, 00:10:06.533 "nvme_iov_md": false 00:10:06.533 }, 00:10:06.533 "memory_domains": [ 00:10:06.533 { 00:10:06.533 "dma_device_id": "system", 00:10:06.533 "dma_device_type": 1 00:10:06.533 }, 00:10:06.533 { 00:10:06.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.533 "dma_device_type": 2 00:10:06.533 } 00:10:06.533 ], 00:10:06.533 "driver_specific": {} 00:10:06.533 } 00:10:06.533 ] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.533 17:51:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 [2024-10-25 17:51:24.916703] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.533 [2024-10-25 17:51:24.916749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.533 [2024-10-25 17:51:24.916770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.533 [2024-10-25 17:51:24.918501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.533 [2024-10-25 17:51:24.918555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.533 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.794 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.794 "name": "Existed_Raid", 00:10:06.794 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:06.794 "strip_size_kb": 64, 00:10:06.794 "state": "configuring", 00:10:06.794 "raid_level": "raid0", 00:10:06.794 "superblock": true, 00:10:06.794 "num_base_bdevs": 4, 00:10:06.794 "num_base_bdevs_discovered": 3, 00:10:06.794 "num_base_bdevs_operational": 4, 00:10:06.794 "base_bdevs_list": [ 00:10:06.794 { 00:10:06.794 "name": "BaseBdev1", 00:10:06.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.794 "is_configured": false, 00:10:06.794 "data_offset": 0, 00:10:06.794 "data_size": 0 00:10:06.794 }, 00:10:06.794 { 00:10:06.794 "name": "BaseBdev2", 00:10:06.794 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:06.794 "is_configured": true, 00:10:06.794 "data_offset": 2048, 00:10:06.794 "data_size": 63488 
00:10:06.794 }, 00:10:06.794 { 00:10:06.794 "name": "BaseBdev3", 00:10:06.794 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:06.794 "is_configured": true, 00:10:06.794 "data_offset": 2048, 00:10:06.794 "data_size": 63488 00:10:06.794 }, 00:10:06.794 { 00:10:06.794 "name": "BaseBdev4", 00:10:06.794 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:06.794 "is_configured": true, 00:10:06.794 "data_offset": 2048, 00:10:06.794 "data_size": 63488 00:10:06.794 } 00:10:06.794 ] 00:10:06.794 }' 00:10:06.794 17:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.794 17:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.054 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 [2024-10-25 17:51:25.367947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.055 "name": "Existed_Raid", 00:10:07.055 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:07.055 "strip_size_kb": 64, 00:10:07.055 "state": "configuring", 00:10:07.055 "raid_level": "raid0", 00:10:07.055 "superblock": true, 00:10:07.055 "num_base_bdevs": 4, 00:10:07.055 "num_base_bdevs_discovered": 2, 00:10:07.055 "num_base_bdevs_operational": 4, 00:10:07.055 "base_bdevs_list": [ 00:10:07.055 { 00:10:07.055 "name": "BaseBdev1", 00:10:07.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.055 "is_configured": false, 00:10:07.055 "data_offset": 0, 00:10:07.055 "data_size": 0 00:10:07.055 }, 00:10:07.055 { 00:10:07.055 "name": null, 00:10:07.055 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:07.055 "is_configured": false, 00:10:07.055 "data_offset": 0, 00:10:07.055 "data_size": 63488 
00:10:07.055 }, 00:10:07.055 { 00:10:07.055 "name": "BaseBdev3", 00:10:07.055 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:07.055 "is_configured": true, 00:10:07.055 "data_offset": 2048, 00:10:07.055 "data_size": 63488 00:10:07.055 }, 00:10:07.055 { 00:10:07.055 "name": "BaseBdev4", 00:10:07.055 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:07.055 "is_configured": true, 00:10:07.055 "data_offset": 2048, 00:10:07.055 "data_size": 63488 00:10:07.055 } 00:10:07.055 ] 00:10:07.055 }' 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.055 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.624 [2024-10-25 17:51:25.936044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.624 BaseBdev1 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.624 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 [ 00:10:07.625 { 00:10:07.625 "name": "BaseBdev1", 00:10:07.625 "aliases": [ 00:10:07.625 "0f74caef-1a43-429e-b064-bff9f802d75f" 00:10:07.625 ], 00:10:07.625 "product_name": "Malloc disk", 00:10:07.625 "block_size": 512, 00:10:07.625 "num_blocks": 65536, 00:10:07.625 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:07.625 "assigned_rate_limits": { 00:10:07.625 "rw_ios_per_sec": 0, 00:10:07.625 "rw_mbytes_per_sec": 0, 
00:10:07.625 "r_mbytes_per_sec": 0, 00:10:07.625 "w_mbytes_per_sec": 0 00:10:07.625 }, 00:10:07.625 "claimed": true, 00:10:07.625 "claim_type": "exclusive_write", 00:10:07.625 "zoned": false, 00:10:07.625 "supported_io_types": { 00:10:07.625 "read": true, 00:10:07.625 "write": true, 00:10:07.625 "unmap": true, 00:10:07.625 "flush": true, 00:10:07.625 "reset": true, 00:10:07.625 "nvme_admin": false, 00:10:07.625 "nvme_io": false, 00:10:07.625 "nvme_io_md": false, 00:10:07.625 "write_zeroes": true, 00:10:07.625 "zcopy": true, 00:10:07.625 "get_zone_info": false, 00:10:07.625 "zone_management": false, 00:10:07.625 "zone_append": false, 00:10:07.625 "compare": false, 00:10:07.625 "compare_and_write": false, 00:10:07.625 "abort": true, 00:10:07.625 "seek_hole": false, 00:10:07.625 "seek_data": false, 00:10:07.625 "copy": true, 00:10:07.625 "nvme_iov_md": false 00:10:07.625 }, 00:10:07.625 "memory_domains": [ 00:10:07.625 { 00:10:07.625 "dma_device_id": "system", 00:10:07.625 "dma_device_type": 1 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.625 "dma_device_type": 2 00:10:07.625 } 00:10:07.625 ], 00:10:07.625 "driver_specific": {} 00:10:07.625 } 00:10:07.625 ] 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.625 17:51:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.625 17:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.625 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.625 "name": "Existed_Raid", 00:10:07.625 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:07.625 "strip_size_kb": 64, 00:10:07.625 "state": "configuring", 00:10:07.625 "raid_level": "raid0", 00:10:07.625 "superblock": true, 00:10:07.625 "num_base_bdevs": 4, 00:10:07.625 "num_base_bdevs_discovered": 3, 00:10:07.625 "num_base_bdevs_operational": 4, 00:10:07.625 "base_bdevs_list": [ 00:10:07.625 { 00:10:07.625 "name": "BaseBdev1", 00:10:07.625 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:07.625 "is_configured": true, 00:10:07.625 "data_offset": 2048, 00:10:07.625 "data_size": 63488 00:10:07.625 }, 00:10:07.625 { 
00:10:07.625 "name": null, 00:10:07.625 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:07.625 "is_configured": false, 00:10:07.625 "data_offset": 0, 00:10:07.625 "data_size": 63488 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "name": "BaseBdev3", 00:10:07.625 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:07.625 "is_configured": true, 00:10:07.625 "data_offset": 2048, 00:10:07.625 "data_size": 63488 00:10:07.625 }, 00:10:07.625 { 00:10:07.625 "name": "BaseBdev4", 00:10:07.625 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:07.625 "is_configured": true, 00:10:07.625 "data_offset": 2048, 00:10:07.625 "data_size": 63488 00:10:07.625 } 00:10:07.625 ] 00:10:07.625 }' 00:10:07.625 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.625 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.197 [2024-10-25 17:51:26.455233] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.197 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.197 17:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.197 "name": "Existed_Raid", 00:10:08.197 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:08.197 "strip_size_kb": 64, 00:10:08.197 "state": "configuring", 00:10:08.197 "raid_level": "raid0", 00:10:08.197 "superblock": true, 00:10:08.197 "num_base_bdevs": 4, 00:10:08.197 "num_base_bdevs_discovered": 2, 00:10:08.197 "num_base_bdevs_operational": 4, 00:10:08.197 "base_bdevs_list": [ 00:10:08.197 { 00:10:08.197 "name": "BaseBdev1", 00:10:08.198 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:08.198 "is_configured": true, 00:10:08.198 "data_offset": 2048, 00:10:08.198 "data_size": 63488 00:10:08.198 }, 00:10:08.198 { 00:10:08.198 "name": null, 00:10:08.198 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:08.198 "is_configured": false, 00:10:08.198 "data_offset": 0, 00:10:08.198 "data_size": 63488 00:10:08.198 }, 00:10:08.198 { 00:10:08.198 "name": null, 00:10:08.198 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:08.198 "is_configured": false, 00:10:08.198 "data_offset": 0, 00:10:08.198 "data_size": 63488 00:10:08.198 }, 00:10:08.198 { 00:10:08.198 "name": "BaseBdev4", 00:10:08.198 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:08.198 "is_configured": true, 00:10:08.198 "data_offset": 2048, 00:10:08.198 "data_size": 63488 00:10:08.198 } 00:10:08.198 ] 00:10:08.198 }' 00:10:08.198 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.198 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.771 17:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.771 [2024-10-25 17:51:26.954388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.771 17:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.771 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.771 "name": "Existed_Raid", 00:10:08.771 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:08.771 "strip_size_kb": 64, 00:10:08.771 "state": "configuring", 00:10:08.771 "raid_level": "raid0", 00:10:08.771 "superblock": true, 00:10:08.771 "num_base_bdevs": 4, 00:10:08.771 "num_base_bdevs_discovered": 3, 00:10:08.771 "num_base_bdevs_operational": 4, 00:10:08.771 "base_bdevs_list": [ 00:10:08.771 { 00:10:08.771 "name": "BaseBdev1", 00:10:08.771 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:08.771 "is_configured": true, 00:10:08.771 "data_offset": 2048, 00:10:08.771 "data_size": 63488 00:10:08.771 }, 00:10:08.771 { 00:10:08.771 "name": null, 00:10:08.771 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:08.771 "is_configured": false, 00:10:08.771 "data_offset": 0, 00:10:08.771 "data_size": 63488 00:10:08.771 }, 00:10:08.771 { 00:10:08.771 "name": "BaseBdev3", 00:10:08.771 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:08.771 "is_configured": true, 00:10:08.771 "data_offset": 2048, 00:10:08.771 "data_size": 63488 00:10:08.771 }, 00:10:08.771 { 00:10:08.771 "name": "BaseBdev4", 00:10:08.771 "uuid": 
"1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:08.771 "is_configured": true, 00:10:08.771 "data_offset": 2048, 00:10:08.771 "data_size": 63488 00:10:08.771 } 00:10:08.771 ] 00:10:08.771 }' 00:10:08.771 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.771 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.031 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.031 [2024-10-25 17:51:27.429598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.291 "name": "Existed_Raid", 00:10:09.291 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:09.291 "strip_size_kb": 64, 00:10:09.291 "state": "configuring", 00:10:09.291 "raid_level": "raid0", 00:10:09.291 "superblock": true, 00:10:09.291 "num_base_bdevs": 4, 00:10:09.291 "num_base_bdevs_discovered": 2, 00:10:09.291 "num_base_bdevs_operational": 4, 00:10:09.291 "base_bdevs_list": [ 00:10:09.291 { 00:10:09.291 "name": null, 00:10:09.291 
"uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:09.291 "is_configured": false, 00:10:09.291 "data_offset": 0, 00:10:09.291 "data_size": 63488 00:10:09.291 }, 00:10:09.291 { 00:10:09.291 "name": null, 00:10:09.291 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:09.291 "is_configured": false, 00:10:09.291 "data_offset": 0, 00:10:09.291 "data_size": 63488 00:10:09.291 }, 00:10:09.291 { 00:10:09.291 "name": "BaseBdev3", 00:10:09.291 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:09.291 "is_configured": true, 00:10:09.291 "data_offset": 2048, 00:10:09.291 "data_size": 63488 00:10:09.291 }, 00:10:09.291 { 00:10:09.291 "name": "BaseBdev4", 00:10:09.291 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:09.291 "is_configured": true, 00:10:09.291 "data_offset": 2048, 00:10:09.291 "data_size": 63488 00:10:09.291 } 00:10:09.291 ] 00:10:09.291 }' 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.291 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.551 [2024-10-25 17:51:27.977680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.551 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.810 17:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.810 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.810 17:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.810 17:51:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.810 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.810 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.810 "name": "Existed_Raid", 00:10:09.810 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:09.810 "strip_size_kb": 64, 00:10:09.810 "state": "configuring", 00:10:09.810 "raid_level": "raid0", 00:10:09.810 "superblock": true, 00:10:09.810 "num_base_bdevs": 4, 00:10:09.810 "num_base_bdevs_discovered": 3, 00:10:09.810 "num_base_bdevs_operational": 4, 00:10:09.810 "base_bdevs_list": [ 00:10:09.810 { 00:10:09.810 "name": null, 00:10:09.810 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:09.810 "is_configured": false, 00:10:09.810 "data_offset": 0, 00:10:09.810 "data_size": 63488 00:10:09.810 }, 00:10:09.810 { 00:10:09.810 "name": "BaseBdev2", 00:10:09.810 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 }, 00:10:09.810 { 00:10:09.810 "name": "BaseBdev3", 00:10:09.810 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 }, 00:10:09.810 { 00:10:09.810 "name": "BaseBdev4", 00:10:09.810 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:09.810 "is_configured": true, 00:10:09.810 "data_offset": 2048, 00:10:09.810 "data_size": 63488 00:10:09.810 } 00:10:09.810 ] 00:10:09.810 }' 00:10:09.810 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.810 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.073 17:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f74caef-1a43-429e-b064-bff9f802d75f 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 [2024-10-25 17:51:28.453301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.073 [2024-10-25 17:51:28.453534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.073 [2024-10-25 17:51:28.453546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.073 [2024-10-25 17:51:28.453806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:10.073 [2024-10-25 17:51:28.453958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.073 [2024-10-25 17:51:28.453975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:10.073 [2024-10-25 17:51:28.454093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.073 NewBaseBdev 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.073 [ 00:10:10.073 { 00:10:10.073 "name": "NewBaseBdev", 00:10:10.073 "aliases": [ 00:10:10.073 "0f74caef-1a43-429e-b064-bff9f802d75f" 00:10:10.073 ], 00:10:10.073 "product_name": "Malloc disk", 00:10:10.073 "block_size": 512, 00:10:10.073 "num_blocks": 65536, 00:10:10.073 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:10.073 "assigned_rate_limits": { 00:10:10.073 "rw_ios_per_sec": 0, 00:10:10.073 "rw_mbytes_per_sec": 0, 00:10:10.073 "r_mbytes_per_sec": 0, 00:10:10.073 "w_mbytes_per_sec": 0 00:10:10.073 }, 00:10:10.073 "claimed": true, 00:10:10.073 "claim_type": "exclusive_write", 00:10:10.073 "zoned": false, 00:10:10.073 "supported_io_types": { 00:10:10.073 "read": true, 00:10:10.073 "write": true, 00:10:10.073 "unmap": true, 00:10:10.073 "flush": true, 00:10:10.073 "reset": true, 00:10:10.073 "nvme_admin": false, 00:10:10.073 "nvme_io": false, 00:10:10.073 "nvme_io_md": false, 00:10:10.073 "write_zeroes": true, 00:10:10.073 "zcopy": true, 00:10:10.073 "get_zone_info": false, 00:10:10.073 "zone_management": false, 00:10:10.073 "zone_append": false, 00:10:10.073 "compare": false, 00:10:10.073 "compare_and_write": false, 00:10:10.073 "abort": true, 00:10:10.073 "seek_hole": false, 00:10:10.073 "seek_data": false, 00:10:10.073 "copy": true, 00:10:10.073 "nvme_iov_md": false 00:10:10.073 }, 00:10:10.073 "memory_domains": [ 00:10:10.073 { 00:10:10.073 "dma_device_id": "system", 00:10:10.073 "dma_device_type": 1 00:10:10.073 }, 00:10:10.073 { 00:10:10.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.073 "dma_device_type": 2 00:10:10.073 } 00:10:10.073 ], 00:10:10.073 "driver_specific": {} 00:10:10.073 } 00:10:10.073 ] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.073 17:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.073 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.332 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.332 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.332 "name": "Existed_Raid", 00:10:10.332 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:10.332 "strip_size_kb": 64, 00:10:10.332 
"state": "online", 00:10:10.332 "raid_level": "raid0", 00:10:10.332 "superblock": true, 00:10:10.332 "num_base_bdevs": 4, 00:10:10.332 "num_base_bdevs_discovered": 4, 00:10:10.332 "num_base_bdevs_operational": 4, 00:10:10.332 "base_bdevs_list": [ 00:10:10.332 { 00:10:10.332 "name": "NewBaseBdev", 00:10:10.332 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:10.332 "is_configured": true, 00:10:10.332 "data_offset": 2048, 00:10:10.332 "data_size": 63488 00:10:10.332 }, 00:10:10.332 { 00:10:10.332 "name": "BaseBdev2", 00:10:10.332 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:10.332 "is_configured": true, 00:10:10.332 "data_offset": 2048, 00:10:10.332 "data_size": 63488 00:10:10.332 }, 00:10:10.332 { 00:10:10.332 "name": "BaseBdev3", 00:10:10.332 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:10.332 "is_configured": true, 00:10:10.332 "data_offset": 2048, 00:10:10.332 "data_size": 63488 00:10:10.332 }, 00:10:10.332 { 00:10:10.332 "name": "BaseBdev4", 00:10:10.332 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:10.332 "is_configured": true, 00:10:10.332 "data_offset": 2048, 00:10:10.332 "data_size": 63488 00:10:10.332 } 00:10:10.332 ] 00:10:10.332 }' 00:10:10.332 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.332 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.591 
17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.591 [2024-10-25 17:51:28.956846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.591 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.591 "name": "Existed_Raid", 00:10:10.591 "aliases": [ 00:10:10.591 "ba1b3335-0953-4741-9410-d0b4ff590f50" 00:10:10.591 ], 00:10:10.591 "product_name": "Raid Volume", 00:10:10.591 "block_size": 512, 00:10:10.591 "num_blocks": 253952, 00:10:10.591 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:10.591 "assigned_rate_limits": { 00:10:10.591 "rw_ios_per_sec": 0, 00:10:10.591 "rw_mbytes_per_sec": 0, 00:10:10.591 "r_mbytes_per_sec": 0, 00:10:10.591 "w_mbytes_per_sec": 0 00:10:10.591 }, 00:10:10.591 "claimed": false, 00:10:10.591 "zoned": false, 00:10:10.591 "supported_io_types": { 00:10:10.591 "read": true, 00:10:10.591 "write": true, 00:10:10.591 "unmap": true, 00:10:10.591 "flush": true, 00:10:10.591 "reset": true, 00:10:10.591 "nvme_admin": false, 00:10:10.591 "nvme_io": false, 00:10:10.591 "nvme_io_md": false, 00:10:10.591 "write_zeroes": true, 00:10:10.591 "zcopy": false, 00:10:10.591 "get_zone_info": false, 00:10:10.591 "zone_management": false, 00:10:10.591 "zone_append": false, 00:10:10.591 "compare": false, 00:10:10.591 "compare_and_write": false, 00:10:10.591 "abort": 
false, 00:10:10.591 "seek_hole": false, 00:10:10.591 "seek_data": false, 00:10:10.591 "copy": false, 00:10:10.591 "nvme_iov_md": false 00:10:10.591 }, 00:10:10.591 "memory_domains": [ 00:10:10.591 { 00:10:10.591 "dma_device_id": "system", 00:10:10.591 "dma_device_type": 1 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.591 "dma_device_type": 2 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "system", 00:10:10.591 "dma_device_type": 1 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.591 "dma_device_type": 2 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "system", 00:10:10.591 "dma_device_type": 1 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.591 "dma_device_type": 2 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "system", 00:10:10.591 "dma_device_type": 1 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.591 "dma_device_type": 2 00:10:10.591 } 00:10:10.591 ], 00:10:10.591 "driver_specific": { 00:10:10.591 "raid": { 00:10:10.591 "uuid": "ba1b3335-0953-4741-9410-d0b4ff590f50", 00:10:10.591 "strip_size_kb": 64, 00:10:10.591 "state": "online", 00:10:10.591 "raid_level": "raid0", 00:10:10.591 "superblock": true, 00:10:10.591 "num_base_bdevs": 4, 00:10:10.591 "num_base_bdevs_discovered": 4, 00:10:10.591 "num_base_bdevs_operational": 4, 00:10:10.591 "base_bdevs_list": [ 00:10:10.591 { 00:10:10.591 "name": "NewBaseBdev", 00:10:10.591 "uuid": "0f74caef-1a43-429e-b064-bff9f802d75f", 00:10:10.591 "is_configured": true, 00:10:10.591 "data_offset": 2048, 00:10:10.591 "data_size": 63488 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 "name": "BaseBdev2", 00:10:10.591 "uuid": "0738d5a3-cec1-4514-b7e1-559698ad3a7e", 00:10:10.591 "is_configured": true, 00:10:10.591 "data_offset": 2048, 00:10:10.591 "data_size": 63488 00:10:10.591 }, 00:10:10.591 { 00:10:10.591 
"name": "BaseBdev3", 00:10:10.591 "uuid": "aadd3d33-2017-4d3d-94dd-dc89fc92074e", 00:10:10.591 "is_configured": true, 00:10:10.591 "data_offset": 2048, 00:10:10.592 "data_size": 63488 00:10:10.592 }, 00:10:10.592 { 00:10:10.592 "name": "BaseBdev4", 00:10:10.592 "uuid": "1d6fc9f3-2453-439c-8614-cc91c5f47d34", 00:10:10.592 "is_configured": true, 00:10:10.592 "data_offset": 2048, 00:10:10.592 "data_size": 63488 00:10:10.592 } 00:10:10.592 ] 00:10:10.592 } 00:10:10.592 } 00:10:10.592 }' 00:10:10.592 17:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.851 BaseBdev2 00:10:10.851 BaseBdev3 00:10:10.851 BaseBdev4' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.851 17:51:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.851 [2024-10-25 17:51:29.239983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.851 [2024-10-25 17:51:29.240014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.851 [2024-10-25 17:51:29.240107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.851 [2024-10-25 17:51:29.240174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.851 [2024-10-25 17:51:29.240184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69787 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 69787 ']' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 69787 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69787 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.851 killing process with pid 69787 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69787' 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 69787 00:10:10.851 [2024-10-25 17:51:29.276818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.851 17:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 69787 00:10:11.420 [2024-10-25 17:51:29.651257] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.359 17:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.359 00:10:12.359 real 0m11.189s 00:10:12.359 user 0m17.822s 00:10:12.359 sys 0m1.985s 00:10:12.359 17:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.359 17:51:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.359 ************************************ 00:10:12.359 END TEST raid_state_function_test_sb 00:10:12.359 ************************************ 00:10:12.359 17:51:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:12.359 17:51:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:12.359 17:51:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.359 17:51:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.618 ************************************ 00:10:12.618 START TEST raid_superblock_test 00:10:12.618 ************************************ 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70452 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70452 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70452 ']' 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.618 17:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.618 [2024-10-25 17:51:30.907168] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:12.618 [2024-10-25 17:51:30.907292] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70452 ] 00:10:12.877 [2024-10-25 17:51:31.086930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.877 [2024-10-25 17:51:31.190236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.137 [2024-10-25 17:51:31.376685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.137 [2024-10-25 17:51:31.376741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.396 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:13.397 
17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 malloc1 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 [2024-10-25 17:51:31.771982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.397 [2024-10-25 17:51:31.772074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.397 [2024-10-25 17:51:31.772096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:13.397 [2024-10-25 17:51:31.772105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.397 [2024-10-25 17:51:31.774124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.397 [2024-10-25 17:51:31.774156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.397 pt1 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 malloc2 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 [2024-10-25 17:51:31.825889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.397 [2024-10-25 17:51:31.825942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.397 [2024-10-25 17:51:31.825976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:13.397 [2024-10-25 17:51:31.825985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.397 [2024-10-25 17:51:31.827951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.397 [2024-10-25 17:51:31.827982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.397 
pt2 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:13.397 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.663 malloc3 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.663 [2024-10-25 17:51:31.892504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.663 [2024-10-25 17:51:31.892561] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.663 [2024-10-25 17:51:31.892581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:13.663 [2024-10-25 17:51:31.892590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.663 [2024-10-25 17:51:31.894830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.663 [2024-10-25 17:51:31.894876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.663 pt3 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.663 malloc4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.663 [2024-10-25 17:51:31.947342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:13.663 [2024-10-25 17:51:31.947416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.663 [2024-10-25 17:51:31.947437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:13.663 [2024-10-25 17:51:31.947446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.663 [2024-10-25 17:51:31.949544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.663 [2024-10-25 17:51:31.949580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:13.663 pt4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.663 [2024-10-25 17:51:31.959350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.663 [2024-10-25 
17:51:31.961198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.663 [2024-10-25 17:51:31.961264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.663 [2024-10-25 17:51:31.961324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:13.663 [2024-10-25 17:51:31.961509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:13.663 [2024-10-25 17:51:31.961528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.663 [2024-10-25 17:51:31.961792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:13.663 [2024-10-25 17:51:31.961969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:13.663 [2024-10-25 17:51:31.961987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:13.663 [2024-10-25 17:51:31.962141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.663 17:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.664 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.664 17:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.664 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.664 "name": "raid_bdev1", 00:10:13.664 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:13.664 "strip_size_kb": 64, 00:10:13.664 "state": "online", 00:10:13.664 "raid_level": "raid0", 00:10:13.664 "superblock": true, 00:10:13.664 "num_base_bdevs": 4, 00:10:13.664 "num_base_bdevs_discovered": 4, 00:10:13.664 "num_base_bdevs_operational": 4, 00:10:13.664 "base_bdevs_list": [ 00:10:13.664 { 00:10:13.664 "name": "pt1", 00:10:13.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.664 "is_configured": true, 00:10:13.664 "data_offset": 2048, 00:10:13.664 "data_size": 63488 00:10:13.664 }, 00:10:13.664 { 00:10:13.664 "name": "pt2", 00:10:13.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.664 "is_configured": true, 00:10:13.664 "data_offset": 2048, 00:10:13.664 "data_size": 63488 00:10:13.664 }, 00:10:13.664 { 00:10:13.664 "name": "pt3", 00:10:13.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.664 "is_configured": true, 00:10:13.664 "data_offset": 2048, 00:10:13.664 
"data_size": 63488 00:10:13.664 }, 00:10:13.664 { 00:10:13.664 "name": "pt4", 00:10:13.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.664 "is_configured": true, 00:10:13.664 "data_offset": 2048, 00:10:13.664 "data_size": 63488 00:10:13.664 } 00:10:13.664 ] 00:10:13.664 }' 00:10:13.664 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.664 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.932 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.932 [2024-10-25 17:51:32.366975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.193 "name": "raid_bdev1", 00:10:14.193 "aliases": [ 00:10:14.193 "da06102a-3e1b-4cd9-a71b-de211bfa1030" 
00:10:14.193 ], 00:10:14.193 "product_name": "Raid Volume", 00:10:14.193 "block_size": 512, 00:10:14.193 "num_blocks": 253952, 00:10:14.193 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:14.193 "assigned_rate_limits": { 00:10:14.193 "rw_ios_per_sec": 0, 00:10:14.193 "rw_mbytes_per_sec": 0, 00:10:14.193 "r_mbytes_per_sec": 0, 00:10:14.193 "w_mbytes_per_sec": 0 00:10:14.193 }, 00:10:14.193 "claimed": false, 00:10:14.193 "zoned": false, 00:10:14.193 "supported_io_types": { 00:10:14.193 "read": true, 00:10:14.193 "write": true, 00:10:14.193 "unmap": true, 00:10:14.193 "flush": true, 00:10:14.193 "reset": true, 00:10:14.193 "nvme_admin": false, 00:10:14.193 "nvme_io": false, 00:10:14.193 "nvme_io_md": false, 00:10:14.193 "write_zeroes": true, 00:10:14.193 "zcopy": false, 00:10:14.193 "get_zone_info": false, 00:10:14.193 "zone_management": false, 00:10:14.193 "zone_append": false, 00:10:14.193 "compare": false, 00:10:14.193 "compare_and_write": false, 00:10:14.193 "abort": false, 00:10:14.193 "seek_hole": false, 00:10:14.193 "seek_data": false, 00:10:14.193 "copy": false, 00:10:14.193 "nvme_iov_md": false 00:10:14.193 }, 00:10:14.193 "memory_domains": [ 00:10:14.193 { 00:10:14.193 "dma_device_id": "system", 00:10:14.193 "dma_device_type": 1 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.193 "dma_device_type": 2 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "system", 00:10:14.193 "dma_device_type": 1 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.193 "dma_device_type": 2 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "system", 00:10:14.193 "dma_device_type": 1 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.193 "dma_device_type": 2 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": "system", 00:10:14.193 "dma_device_type": 1 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:14.193 "dma_device_type": 2 00:10:14.193 } 00:10:14.193 ], 00:10:14.193 "driver_specific": { 00:10:14.193 "raid": { 00:10:14.193 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:14.193 "strip_size_kb": 64, 00:10:14.193 "state": "online", 00:10:14.193 "raid_level": "raid0", 00:10:14.193 "superblock": true, 00:10:14.193 "num_base_bdevs": 4, 00:10:14.193 "num_base_bdevs_discovered": 4, 00:10:14.193 "num_base_bdevs_operational": 4, 00:10:14.193 "base_bdevs_list": [ 00:10:14.193 { 00:10:14.193 "name": "pt1", 00:10:14.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.193 "is_configured": true, 00:10:14.193 "data_offset": 2048, 00:10:14.193 "data_size": 63488 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "name": "pt2", 00:10:14.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.193 "is_configured": true, 00:10:14.193 "data_offset": 2048, 00:10:14.193 "data_size": 63488 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "name": "pt3", 00:10:14.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.193 "is_configured": true, 00:10:14.193 "data_offset": 2048, 00:10:14.193 "data_size": 63488 00:10:14.193 }, 00:10:14.193 { 00:10:14.193 "name": "pt4", 00:10:14.193 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.193 "is_configured": true, 00:10:14.193 "data_offset": 2048, 00:10:14.193 "data_size": 63488 00:10:14.193 } 00:10:14.193 ] 00:10:14.193 } 00:10:14.193 } 00:10:14.193 }' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:14.193 pt2 00:10:14.193 pt3 00:10:14.193 pt4' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.193 17:51:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:14.193 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.194 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.194 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 [2024-10-25 17:51:32.658352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da06102a-3e1b-4cd9-a71b-de211bfa1030 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z da06102a-3e1b-4cd9-a71b-de211bfa1030 ']' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 [2024-10-25 17:51:32.702010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.454 [2024-10-25 17:51:32.702035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.454 [2024-10-25 17:51:32.702107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.454 [2024-10-25 17:51:32.702172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.454 [2024-10-25 17:51:32.702185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.454 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.455 17:51:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 [2024-10-25 17:51:32.841804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:14.455 [2024-10-25 17:51:32.843566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:14.455 [2024-10-25 17:51:32.843628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:14.455 [2024-10-25 17:51:32.843658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:14.455 [2024-10-25 17:51:32.843703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:14.455 [2024-10-25 17:51:32.843746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:14.455 [2024-10-25 17:51:32.843764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:14.455 [2024-10-25 17:51:32.843781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:14.455 [2024-10-25 17:51:32.843792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.455 [2024-10-25 17:51:32.843803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:14.455 request: 00:10:14.455 { 00:10:14.455 "name": "raid_bdev1", 00:10:14.455 "raid_level": "raid0", 00:10:14.455 "base_bdevs": [ 00:10:14.455 "malloc1", 00:10:14.455 "malloc2", 00:10:14.455 "malloc3", 00:10:14.455 "malloc4" 00:10:14.455 ], 00:10:14.455 "strip_size_kb": 64, 00:10:14.455 "superblock": false, 00:10:14.455 "method": "bdev_raid_create", 00:10:14.455 "req_id": 1 00:10:14.455 } 00:10:14.455 Got JSON-RPC error response 00:10:14.455 response: 00:10:14.455 { 00:10:14.455 "code": -17, 00:10:14.455 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:14.455 } 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:14.455 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:14.714 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.714 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.715 [2024-10-25 17:51:32.893675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.715 [2024-10-25 17:51:32.893737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.715 [2024-10-25 17:51:32.893751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:14.715 [2024-10-25 17:51:32.893761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.715 [2024-10-25 17:51:32.895787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.715 [2024-10-25 17:51:32.895837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.715 [2024-10-25 17:51:32.895901] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.715 [2024-10-25 17:51:32.895959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.715 pt1 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.715 "name": "raid_bdev1", 00:10:14.715 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:14.715 "strip_size_kb": 64, 00:10:14.715 "state": "configuring", 00:10:14.715 "raid_level": "raid0", 00:10:14.715 "superblock": true, 00:10:14.715 "num_base_bdevs": 4, 00:10:14.715 "num_base_bdevs_discovered": 1, 00:10:14.715 "num_base_bdevs_operational": 4, 00:10:14.715 "base_bdevs_list": [ 00:10:14.715 { 00:10:14.715 "name": "pt1", 00:10:14.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.715 "is_configured": true, 00:10:14.715 "data_offset": 2048, 00:10:14.715 "data_size": 63488 00:10:14.715 }, 00:10:14.715 { 00:10:14.715 "name": null, 00:10:14.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.715 "is_configured": false, 00:10:14.715 "data_offset": 2048, 00:10:14.715 "data_size": 63488 00:10:14.715 }, 00:10:14.715 { 00:10:14.715 "name": null, 00:10:14.715 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.715 "is_configured": false, 00:10:14.715 "data_offset": 2048, 00:10:14.715 "data_size": 63488 00:10:14.715 }, 00:10:14.715 { 00:10:14.715 "name": null, 00:10:14.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.715 "is_configured": false, 00:10:14.715 "data_offset": 2048, 00:10:14.715 "data_size": 63488 00:10:14.715 } 00:10:14.715 ] 00:10:14.715 }' 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.715 17:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.974 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.975 [2024-10-25 17:51:33.313020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.975 [2024-10-25 17:51:33.313102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.975 [2024-10-25 17:51:33.313122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:14.975 [2024-10-25 17:51:33.313135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.975 [2024-10-25 17:51:33.313621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.975 [2024-10-25 17:51:33.313652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.975 [2024-10-25 17:51:33.313740] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.975 [2024-10-25 17:51:33.313769] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.975 pt2 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.975 [2024-10-25 17:51:33.324990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.975 17:51:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.975 "name": "raid_bdev1", 00:10:14.975 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:14.975 "strip_size_kb": 64, 00:10:14.975 "state": "configuring", 00:10:14.975 "raid_level": "raid0", 00:10:14.975 "superblock": true, 00:10:14.975 "num_base_bdevs": 4, 00:10:14.975 "num_base_bdevs_discovered": 1, 00:10:14.975 "num_base_bdevs_operational": 4, 00:10:14.975 "base_bdevs_list": [ 00:10:14.975 { 00:10:14.975 "name": "pt1", 00:10:14.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.975 "is_configured": true, 00:10:14.975 "data_offset": 2048, 00:10:14.975 "data_size": 63488 00:10:14.975 }, 00:10:14.975 { 00:10:14.975 "name": null, 00:10:14.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.975 "is_configured": false, 00:10:14.975 "data_offset": 0, 00:10:14.975 "data_size": 63488 00:10:14.975 }, 00:10:14.975 { 00:10:14.975 "name": null, 00:10:14.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.975 "is_configured": false, 00:10:14.975 "data_offset": 2048, 00:10:14.975 "data_size": 63488 00:10:14.975 }, 00:10:14.975 { 00:10:14.975 "name": null, 00:10:14.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.975 "is_configured": false, 00:10:14.975 "data_offset": 2048, 00:10:14.975 "data_size": 63488 00:10:14.975 } 00:10:14.975 ] 00:10:14.975 }' 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.975 17:51:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.545 [2024-10-25 17:51:33.744238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.545 [2024-10-25 17:51:33.744291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.545 [2024-10-25 17:51:33.744310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:15.545 [2024-10-25 17:51:33.744319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.545 [2024-10-25 17:51:33.744753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.545 [2024-10-25 17:51:33.744777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.545 [2024-10-25 17:51:33.744865] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:15.545 [2024-10-25 17:51:33.744886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.545 pt2 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.545 [2024-10-25 17:51:33.756198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.545 [2024-10-25 17:51:33.756243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.545 [2024-10-25 17:51:33.756265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:15.545 [2024-10-25 17:51:33.756275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.545 [2024-10-25 17:51:33.756615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.545 [2024-10-25 17:51:33.756630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.545 [2024-10-25 17:51:33.756691] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.545 [2024-10-25 17:51:33.756708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.545 pt3 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.545 [2024-10-25 17:51:33.768191] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:15.545 [2024-10-25 17:51:33.768278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.545 [2024-10-25 17:51:33.768300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:15.545 [2024-10-25 17:51:33.768308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.545 [2024-10-25 17:51:33.768663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.545 [2024-10-25 17:51:33.768678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:15.545 [2024-10-25 17:51:33.768737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:15.545 [2024-10-25 17:51:33.768754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:15.545 [2024-10-25 17:51:33.768900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.545 [2024-10-25 17:51:33.768909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.545 [2024-10-25 17:51:33.769146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:15.545 [2024-10-25 17:51:33.769307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.545 [2024-10-25 17:51:33.769328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:15.545 [2024-10-25 17:51:33.769477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.545 pt4 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.545 "name": "raid_bdev1", 00:10:15.545 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:15.545 "strip_size_kb": 64, 00:10:15.545 "state": "online", 00:10:15.545 "raid_level": "raid0", 00:10:15.545 
"superblock": true, 00:10:15.545 "num_base_bdevs": 4, 00:10:15.545 "num_base_bdevs_discovered": 4, 00:10:15.545 "num_base_bdevs_operational": 4, 00:10:15.545 "base_bdevs_list": [ 00:10:15.545 { 00:10:15.545 "name": "pt1", 00:10:15.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.545 "is_configured": true, 00:10:15.545 "data_offset": 2048, 00:10:15.545 "data_size": 63488 00:10:15.545 }, 00:10:15.545 { 00:10:15.545 "name": "pt2", 00:10:15.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.545 "is_configured": true, 00:10:15.545 "data_offset": 2048, 00:10:15.545 "data_size": 63488 00:10:15.545 }, 00:10:15.545 { 00:10:15.545 "name": "pt3", 00:10:15.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.545 "is_configured": true, 00:10:15.545 "data_offset": 2048, 00:10:15.545 "data_size": 63488 00:10:15.545 }, 00:10:15.545 { 00:10:15.545 "name": "pt4", 00:10:15.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.545 "is_configured": true, 00:10:15.545 "data_offset": 2048, 00:10:15.545 "data_size": 63488 00:10:15.545 } 00:10:15.545 ] 00:10:15.545 }' 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.545 17:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.805 17:51:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 [2024-10-25 17:51:34.179867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.805 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.805 "name": "raid_bdev1", 00:10:15.805 "aliases": [ 00:10:15.805 "da06102a-3e1b-4cd9-a71b-de211bfa1030" 00:10:15.805 ], 00:10:15.805 "product_name": "Raid Volume", 00:10:15.805 "block_size": 512, 00:10:15.805 "num_blocks": 253952, 00:10:15.805 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:15.805 "assigned_rate_limits": { 00:10:15.805 "rw_ios_per_sec": 0, 00:10:15.805 "rw_mbytes_per_sec": 0, 00:10:15.805 "r_mbytes_per_sec": 0, 00:10:15.805 "w_mbytes_per_sec": 0 00:10:15.805 }, 00:10:15.805 "claimed": false, 00:10:15.805 "zoned": false, 00:10:15.805 "supported_io_types": { 00:10:15.805 "read": true, 00:10:15.805 "write": true, 00:10:15.805 "unmap": true, 00:10:15.805 "flush": true, 00:10:15.805 "reset": true, 00:10:15.805 "nvme_admin": false, 00:10:15.805 "nvme_io": false, 00:10:15.805 "nvme_io_md": false, 00:10:15.805 "write_zeroes": true, 00:10:15.805 "zcopy": false, 00:10:15.805 "get_zone_info": false, 00:10:15.805 "zone_management": false, 00:10:15.805 "zone_append": false, 00:10:15.805 "compare": false, 00:10:15.805 "compare_and_write": false, 00:10:15.805 "abort": false, 00:10:15.805 "seek_hole": false, 00:10:15.805 "seek_data": false, 00:10:15.805 "copy": false, 00:10:15.805 "nvme_iov_md": false 00:10:15.805 }, 00:10:15.805 
"memory_domains": [ 00:10:15.805 { 00:10:15.805 "dma_device_id": "system", 00:10:15.806 "dma_device_type": 1 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.806 "dma_device_type": 2 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "system", 00:10:15.806 "dma_device_type": 1 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.806 "dma_device_type": 2 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "system", 00:10:15.806 "dma_device_type": 1 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.806 "dma_device_type": 2 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "system", 00:10:15.806 "dma_device_type": 1 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.806 "dma_device_type": 2 00:10:15.806 } 00:10:15.806 ], 00:10:15.806 "driver_specific": { 00:10:15.806 "raid": { 00:10:15.806 "uuid": "da06102a-3e1b-4cd9-a71b-de211bfa1030", 00:10:15.806 "strip_size_kb": 64, 00:10:15.806 "state": "online", 00:10:15.806 "raid_level": "raid0", 00:10:15.806 "superblock": true, 00:10:15.806 "num_base_bdevs": 4, 00:10:15.806 "num_base_bdevs_discovered": 4, 00:10:15.806 "num_base_bdevs_operational": 4, 00:10:15.806 "base_bdevs_list": [ 00:10:15.806 { 00:10:15.806 "name": "pt1", 00:10:15.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.806 "is_configured": true, 00:10:15.806 "data_offset": 2048, 00:10:15.806 "data_size": 63488 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "name": "pt2", 00:10:15.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.806 "is_configured": true, 00:10:15.806 "data_offset": 2048, 00:10:15.806 "data_size": 63488 00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "name": "pt3", 00:10:15.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.806 "is_configured": true, 00:10:15.806 "data_offset": 2048, 00:10:15.806 "data_size": 63488 
00:10:15.806 }, 00:10:15.806 { 00:10:15.806 "name": "pt4", 00:10:15.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.806 "is_configured": true, 00:10:15.806 "data_offset": 2048, 00:10:15.806 "data_size": 63488 00:10:15.806 } 00:10:15.806 ] 00:10:15.806 } 00:10:15.806 } 00:10:15.806 }' 00:10:15.806 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.066 pt2 00:10:16.066 pt3 00:10:16.066 pt4' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.066 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:16.326 [2024-10-25 17:51:34.507236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' da06102a-3e1b-4cd9-a71b-de211bfa1030 '!=' da06102a-3e1b-4cd9-a71b-de211bfa1030 ']' 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70452 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70452 ']' 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70452 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70452 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70452' 00:10:16.326 killing process with pid 70452 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70452 00:10:16.326 [2024-10-25 17:51:34.592853] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.326 [2024-10-25 17:51:34.592989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.326 [2024-10-25 17:51:34.593097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.326 17:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70452 00:10:16.326 [2024-10-25 17:51:34.593150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:16.584 [2024-10-25 17:51:34.966494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.962 17:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:17.962 00:10:17.962 real 0m5.200s 00:10:17.962 user 0m7.382s 00:10:17.962 sys 0m0.943s 00:10:17.962 17:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.962 17:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 ************************************ 00:10:17.962 END TEST raid_superblock_test 
00:10:17.962 ************************************ 00:10:17.962 17:51:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:17.962 17:51:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:17.962 17:51:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.962 17:51:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 ************************************ 00:10:17.962 START TEST raid_read_error_test 00:10:17.962 ************************************ 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:17.962 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oGXavN2BAd 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70711 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70711 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 70711 ']' 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.963 17:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.963 [2024-10-25 17:51:36.187152] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:17.963 [2024-10-25 17:51:36.187382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70711 ] 00:10:17.963 [2024-10-25 17:51:36.364285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.222 [2024-10-25 17:51:36.478128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.481 [2024-10-25 17:51:36.681185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.481 [2024-10-25 17:51:36.681236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 BaseBdev1_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 true 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 [2024-10-25 17:51:37.093179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:18.740 [2024-10-25 17:51:37.093234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.740 [2024-10-25 17:51:37.093254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:18.740 [2024-10-25 17:51:37.093265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.740 [2024-10-25 17:51:37.095435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.740 [2024-10-25 17:51:37.095479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:18.740 BaseBdev1 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 BaseBdev2_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 true 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.740 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 [2024-10-25 17:51:37.160581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:18.740 [2024-10-25 17:51:37.160644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.740 [2024-10-25 17:51:37.160662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:18.740 [2024-10-25 17:51:37.160672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.740 [2024-10-25 17:51:37.162673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.741 [2024-10-25 17:51:37.162780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:18.741 BaseBdev2 00:10:18.741 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.741 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.741 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:18.741 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.741 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 BaseBdev3_malloc 00:10:18.999 17:51:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 true 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 [2024-10-25 17:51:37.238600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:18.999 [2024-10-25 17:51:37.238656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.999 [2024-10-25 17:51:37.238675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:18.999 [2024-10-25 17:51:37.238687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.999 [2024-10-25 17:51:37.240868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.999 [2024-10-25 17:51:37.240906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:18.999 BaseBdev3 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 BaseBdev4_malloc 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 true 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.999 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.999 [2024-10-25 17:51:37.304214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.000 [2024-10-25 17:51:37.304267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.000 [2024-10-25 17:51:37.304284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.000 [2024-10-25 17:51:37.304296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.000 [2024-10-25 17:51:37.306292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.000 [2024-10-25 17:51:37.306390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.000 BaseBdev4 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.000 [2024-10-25 17:51:37.316265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.000 [2024-10-25 17:51:37.318082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.000 [2024-10-25 17:51:37.318154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.000 [2024-10-25 17:51:37.318217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.000 [2024-10-25 17:51:37.318430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:19.000 [2024-10-25 17:51:37.318444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.000 [2024-10-25 17:51:37.318668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:19.000 [2024-10-25 17:51:37.318834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:19.000 [2024-10-25 17:51:37.318856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:19.000 [2024-10-25 17:51:37.318994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:19.000 17:51:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.000 "name": "raid_bdev1", 00:10:19.000 "uuid": "90edaff4-aeb7-451e-b956-15480c1b3cbf", 00:10:19.000 "strip_size_kb": 64, 00:10:19.000 "state": "online", 00:10:19.000 "raid_level": "raid0", 00:10:19.000 "superblock": true, 00:10:19.000 "num_base_bdevs": 4, 00:10:19.000 "num_base_bdevs_discovered": 4, 00:10:19.000 "num_base_bdevs_operational": 4, 00:10:19.000 "base_bdevs_list": [ 00:10:19.000 
{ 00:10:19.000 "name": "BaseBdev1", 00:10:19.000 "uuid": "8f9cc3e1-411e-511c-91e3-b2c0058329a1", 00:10:19.000 "is_configured": true, 00:10:19.000 "data_offset": 2048, 00:10:19.000 "data_size": 63488 00:10:19.000 }, 00:10:19.000 { 00:10:19.000 "name": "BaseBdev2", 00:10:19.000 "uuid": "49e3dfe1-ef10-5191-9166-5d012bba2889", 00:10:19.000 "is_configured": true, 00:10:19.000 "data_offset": 2048, 00:10:19.000 "data_size": 63488 00:10:19.000 }, 00:10:19.000 { 00:10:19.000 "name": "BaseBdev3", 00:10:19.000 "uuid": "b3525968-2c47-5b4e-9f2b-3dd683be4fb7", 00:10:19.000 "is_configured": true, 00:10:19.000 "data_offset": 2048, 00:10:19.000 "data_size": 63488 00:10:19.000 }, 00:10:19.000 { 00:10:19.000 "name": "BaseBdev4", 00:10:19.000 "uuid": "a8d49003-f4bc-5838-bb1e-04e5c98da90a", 00:10:19.000 "is_configured": true, 00:10:19.000 "data_offset": 2048, 00:10:19.000 "data_size": 63488 00:10:19.000 } 00:10:19.000 ] 00:10:19.000 }' 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.000 17:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.566 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:19.566 17:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:19.566 [2024-10-25 17:51:37.836669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.504 17:51:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.504 17:51:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.504 "name": "raid_bdev1", 00:10:20.504 "uuid": "90edaff4-aeb7-451e-b956-15480c1b3cbf", 00:10:20.504 "strip_size_kb": 64, 00:10:20.504 "state": "online", 00:10:20.504 "raid_level": "raid0", 00:10:20.504 "superblock": true, 00:10:20.504 "num_base_bdevs": 4, 00:10:20.504 "num_base_bdevs_discovered": 4, 00:10:20.504 "num_base_bdevs_operational": 4, 00:10:20.504 "base_bdevs_list": [ 00:10:20.504 { 00:10:20.504 "name": "BaseBdev1", 00:10:20.504 "uuid": "8f9cc3e1-411e-511c-91e3-b2c0058329a1", 00:10:20.504 "is_configured": true, 00:10:20.504 "data_offset": 2048, 00:10:20.504 "data_size": 63488 00:10:20.504 }, 00:10:20.504 { 00:10:20.504 "name": "BaseBdev2", 00:10:20.504 "uuid": "49e3dfe1-ef10-5191-9166-5d012bba2889", 00:10:20.504 "is_configured": true, 00:10:20.504 "data_offset": 2048, 00:10:20.504 "data_size": 63488 00:10:20.504 }, 00:10:20.504 { 00:10:20.504 "name": "BaseBdev3", 00:10:20.504 "uuid": "b3525968-2c47-5b4e-9f2b-3dd683be4fb7", 00:10:20.504 "is_configured": true, 00:10:20.504 "data_offset": 2048, 00:10:20.504 "data_size": 63488 00:10:20.504 }, 00:10:20.504 { 00:10:20.504 "name": "BaseBdev4", 00:10:20.504 "uuid": "a8d49003-f4bc-5838-bb1e-04e5c98da90a", 00:10:20.504 "is_configured": true, 00:10:20.504 "data_offset": 2048, 00:10:20.504 "data_size": 63488 00:10:20.504 } 00:10:20.504 ] 00:10:20.504 }' 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.504 17:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.763 17:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.763 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.763 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.022 [2024-10-25 17:51:39.200300] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.022 [2024-10-25 17:51:39.200333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.022 [2024-10-25 17:51:39.202943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.022 [2024-10-25 17:51:39.202999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.022 [2024-10-25 17:51:39.203041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.022 [2024-10-25 17:51:39.203053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:21.022 { 00:10:21.022 "results": [ 00:10:21.022 { 00:10:21.022 "job": "raid_bdev1", 00:10:21.022 "core_mask": "0x1", 00:10:21.022 "workload": "randrw", 00:10:21.022 "percentage": 50, 00:10:21.022 "status": "finished", 00:10:21.022 "queue_depth": 1, 00:10:21.022 "io_size": 131072, 00:10:21.022 "runtime": 1.364484, 00:10:21.022 "iops": 16606.27753788245, 00:10:21.022 "mibps": 2075.7846922353065, 00:10:21.022 "io_failed": 1, 00:10:21.022 "io_timeout": 0, 00:10:21.022 "avg_latency_us": 83.7908561341571, 00:10:21.022 "min_latency_us": 24.929257641921396, 00:10:21.022 "max_latency_us": 1395.1441048034935 00:10:21.022 } 00:10:21.022 ], 00:10:21.022 "core_count": 1 00:10:21.022 } 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70711 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 70711 ']' 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 70711 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70711 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.022 killing process with pid 70711 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.022 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70711' 00:10:21.023 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 70711 00:10:21.023 [2024-10-25 17:51:39.235236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.023 17:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 70711 00:10:21.287 [2024-10-25 17:51:39.552722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oGXavN2BAd 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:22.673 ************************************ 00:10:22.673 END TEST raid_read_error_test 00:10:22.673 ************************************ 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:22.673 00:10:22.673 real 0m4.605s 
00:10:22.673 user 0m5.423s 00:10:22.673 sys 0m0.586s 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.673 17:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 17:51:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:22.673 17:51:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.673 17:51:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.673 17:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 ************************************ 00:10:22.673 START TEST raid_write_error_test 00:10:22.673 ************************************ 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y33ZMhx069 00:10:22.673 17:51:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70861 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70861 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 70861 ']' 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.673 17:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 [2024-10-25 17:51:40.879109] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:22.673 [2024-10-25 17:51:40.879344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:10:22.673 [2024-10-25 17:51:41.057022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.933 [2024-10-25 17:51:41.169084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.933 [2024-10-25 17:51:41.360150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.933 [2024-10-25 17:51:41.360242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 BaseBdev1_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 true 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 [2024-10-25 17:51:41.759217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.503 [2024-10-25 17:51:41.759274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.503 [2024-10-25 17:51:41.759294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:23.503 [2024-10-25 17:51:41.759304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.503 [2024-10-25 17:51:41.761368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.503 [2024-10-25 17:51:41.761409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.503 BaseBdev1 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 BaseBdev2_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.503 17:51:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 true 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 [2024-10-25 17:51:41.826637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.503 [2024-10-25 17:51:41.826695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.503 [2024-10-25 17:51:41.826712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:23.503 [2024-10-25 17:51:41.826722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.503 [2024-10-25 17:51:41.828942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.503 [2024-10-25 17:51:41.828985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.503 BaseBdev2 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:23.503 BaseBdev3_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 true 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.503 [2024-10-25 17:51:41.903529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.503 [2024-10-25 17:51:41.903603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.503 [2024-10-25 17:51:41.903624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:23.503 [2024-10-25 17:51:41.903636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.503 [2024-10-25 17:51:41.905800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.503 [2024-10-25 17:51:41.905901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.503 BaseBdev3 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.503 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.764 BaseBdev4_malloc 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.764 true 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.764 [2024-10-25 17:51:41.956986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:23.764 [2024-10-25 17:51:41.957083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.764 [2024-10-25 17:51:41.957104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:23.764 [2024-10-25 17:51:41.957115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.764 [2024-10-25 17:51:41.959169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.764 [2024-10-25 17:51:41.959207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:23.764 BaseBdev4 
00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.764 [2024-10-25 17:51:41.969023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.764 [2024-10-25 17:51:41.970779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.764 [2024-10-25 17:51:41.970860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.764 [2024-10-25 17:51:41.970927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.764 [2024-10-25 17:51:41.971132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:23.764 [2024-10-25 17:51:41.971148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:23.764 [2024-10-25 17:51:41.971373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:23.764 [2024-10-25 17:51:41.971518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:23.764 [2024-10-25 17:51:41.971528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:23.764 [2024-10-25 17:51:41.971696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.764 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.765 17:51:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.765 17:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.765 17:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.765 "name": "raid_bdev1", 00:10:23.765 "uuid": "feab4c58-678b-4a84-a7f6-b65fbd4f6b59", 00:10:23.765 "strip_size_kb": 64, 00:10:23.765 "state": "online", 00:10:23.765 "raid_level": "raid0", 00:10:23.765 "superblock": true, 00:10:23.765 "num_base_bdevs": 4, 00:10:23.765 "num_base_bdevs_discovered": 4, 00:10:23.765 
"num_base_bdevs_operational": 4, 00:10:23.765 "base_bdevs_list": [ 00:10:23.765 { 00:10:23.765 "name": "BaseBdev1", 00:10:23.765 "uuid": "7dcccc51-0dc1-501d-a9bb-42975fe55f5e", 00:10:23.765 "is_configured": true, 00:10:23.765 "data_offset": 2048, 00:10:23.765 "data_size": 63488 00:10:23.765 }, 00:10:23.765 { 00:10:23.765 "name": "BaseBdev2", 00:10:23.765 "uuid": "96ce1397-a5dc-5b79-8cb5-6c9cc2260021", 00:10:23.765 "is_configured": true, 00:10:23.765 "data_offset": 2048, 00:10:23.765 "data_size": 63488 00:10:23.765 }, 00:10:23.765 { 00:10:23.765 "name": "BaseBdev3", 00:10:23.765 "uuid": "1fb7ce30-bbf9-5c23-8cb8-41a4b73d1d73", 00:10:23.765 "is_configured": true, 00:10:23.765 "data_offset": 2048, 00:10:23.765 "data_size": 63488 00:10:23.765 }, 00:10:23.765 { 00:10:23.765 "name": "BaseBdev4", 00:10:23.765 "uuid": "b74a4137-f98a-556c-8833-51c5828b23ee", 00:10:23.765 "is_configured": true, 00:10:23.765 "data_offset": 2048, 00:10:23.765 "data_size": 63488 00:10:23.765 } 00:10:23.765 ] 00:10:23.765 }' 00:10:23.765 17:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.765 17:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.025 17:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.025 17:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.285 [2024-10-25 17:51:42.578042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.224 "name": "raid_bdev1", 00:10:25.224 "uuid": "feab4c58-678b-4a84-a7f6-b65fbd4f6b59", 00:10:25.224 "strip_size_kb": 64, 00:10:25.224 "state": "online", 00:10:25.224 "raid_level": "raid0", 00:10:25.224 "superblock": true, 00:10:25.224 "num_base_bdevs": 4, 00:10:25.224 "num_base_bdevs_discovered": 4, 00:10:25.224 "num_base_bdevs_operational": 4, 00:10:25.224 "base_bdevs_list": [ 00:10:25.224 { 00:10:25.224 "name": "BaseBdev1", 00:10:25.224 "uuid": "7dcccc51-0dc1-501d-a9bb-42975fe55f5e", 00:10:25.224 "is_configured": true, 00:10:25.224 "data_offset": 2048, 00:10:25.224 "data_size": 63488 00:10:25.224 }, 00:10:25.224 { 00:10:25.224 "name": "BaseBdev2", 00:10:25.224 "uuid": "96ce1397-a5dc-5b79-8cb5-6c9cc2260021", 00:10:25.224 "is_configured": true, 00:10:25.224 "data_offset": 2048, 00:10:25.224 "data_size": 63488 00:10:25.224 }, 00:10:25.224 { 00:10:25.224 "name": "BaseBdev3", 00:10:25.224 "uuid": "1fb7ce30-bbf9-5c23-8cb8-41a4b73d1d73", 00:10:25.224 "is_configured": true, 00:10:25.224 "data_offset": 2048, 00:10:25.224 "data_size": 63488 00:10:25.224 }, 00:10:25.224 { 00:10:25.224 "name": "BaseBdev4", 00:10:25.224 "uuid": "b74a4137-f98a-556c-8833-51c5828b23ee", 00:10:25.224 "is_configured": true, 00:10:25.224 "data_offset": 2048, 00:10:25.224 "data_size": 63488 00:10:25.224 } 00:10:25.224 ] 00:10:25.224 }' 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.224 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.485 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.485 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.485 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:25.485 [2024-10-25 17:51:43.888150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.485 [2024-10-25 17:51:43.888257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.485 [2024-10-25 17:51:43.890957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.485 [2024-10-25 17:51:43.891013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.485 [2024-10-25 17:51:43.891056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.485 [2024-10-25 17:51:43.891067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:25.485 { 00:10:25.485 "results": [ 00:10:25.485 { 00:10:25.485 "job": "raid_bdev1", 00:10:25.485 "core_mask": "0x1", 00:10:25.485 "workload": "randrw", 00:10:25.485 "percentage": 50, 00:10:25.485 "status": "finished", 00:10:25.485 "queue_depth": 1, 00:10:25.485 "io_size": 131072, 00:10:25.486 "runtime": 1.310101, 00:10:25.486 "iops": 15772.066428466202, 00:10:25.486 "mibps": 1971.5083035582752, 00:10:25.486 "io_failed": 1, 00:10:25.486 "io_timeout": 0, 00:10:25.486 "avg_latency_us": 88.26649760695985, 00:10:25.486 "min_latency_us": 26.1589519650655, 00:10:25.486 "max_latency_us": 1366.5257641921398 00:10:25.486 } 00:10:25.486 ], 00:10:25.486 "core_count": 1 00:10:25.486 } 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70861 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 70861 ']' 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 70861 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.486 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70861 00:10:25.749 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.749 killing process with pid 70861 00:10:25.749 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.749 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70861' 00:10:25.749 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 70861 00:10:25.749 [2024-10-25 17:51:43.936801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.749 17:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 70861 00:10:26.012 [2024-10-25 17:51:44.254341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.957 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y33ZMhx069 00:10:26.957 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.957 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.217 ************************************ 00:10:27.217 END TEST raid_write_error_test 00:10:27.217 ************************************ 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.76 != \0\.\0\0 ]] 00:10:27.217 00:10:27.217 real 0m4.637s 00:10:27.217 user 0m5.467s 00:10:27.217 sys 0m0.621s 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.217 17:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.217 17:51:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:27.217 17:51:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:27.217 17:51:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:27.217 17:51:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.217 17:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.217 ************************************ 00:10:27.217 START TEST raid_state_function_test 00:10:27.217 ************************************ 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71006 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71006' 00:10:27.217 Process raid pid: 71006 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71006 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71006 ']' 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.217 17:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.217 [2024-10-25 17:51:45.568104] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:27.217 [2024-10-25 17:51:45.568316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.478 [2024-10-25 17:51:45.742310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.478 [2024-10-25 17:51:45.859372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.737 [2024-10-25 17:51:46.057807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.737 [2024-10-25 17:51:46.057946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.997 [2024-10-25 17:51:46.403701] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.997 [2024-10-25 17:51:46.403798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.997 [2024-10-25 17:51:46.403838] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.997 [2024-10-25 17:51:46.403865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.997 [2024-10-25 17:51:46.403885] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:27.997 [2024-10-25 17:51:46.403907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.997 [2024-10-25 17:51:46.403925] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.997 [2024-10-25 17:51:46.403946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.997 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.998 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.257 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.257 "name": "Existed_Raid", 00:10:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.258 "strip_size_kb": 64, 00:10:28.258 "state": "configuring", 00:10:28.258 "raid_level": "concat", 00:10:28.258 "superblock": false, 00:10:28.258 "num_base_bdevs": 4, 00:10:28.258 "num_base_bdevs_discovered": 0, 00:10:28.258 "num_base_bdevs_operational": 4, 00:10:28.258 "base_bdevs_list": [ 00:10:28.258 { 00:10:28.258 "name": "BaseBdev1", 00:10:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.258 "is_configured": false, 00:10:28.258 "data_offset": 0, 00:10:28.258 "data_size": 0 00:10:28.258 }, 00:10:28.258 { 00:10:28.258 "name": "BaseBdev2", 00:10:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.258 "is_configured": false, 00:10:28.258 "data_offset": 0, 00:10:28.258 "data_size": 0 00:10:28.258 }, 00:10:28.258 { 00:10:28.258 "name": "BaseBdev3", 00:10:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.258 "is_configured": false, 00:10:28.258 "data_offset": 0, 00:10:28.258 "data_size": 0 00:10:28.258 }, 00:10:28.258 { 00:10:28.258 "name": "BaseBdev4", 00:10:28.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.258 "is_configured": false, 00:10:28.258 "data_offset": 0, 00:10:28.258 "data_size": 0 00:10:28.258 } 00:10:28.258 ] 00:10:28.258 }' 00:10:28.258 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.258 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.518 [2024-10-25 17:51:46.910774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.518 [2024-10-25 17:51:46.910889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.518 [2024-10-25 17:51:46.922731] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.518 [2024-10-25 17:51:46.922811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.518 [2024-10-25 17:51:46.922854] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.518 [2024-10-25 17:51:46.922877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.518 [2024-10-25 17:51:46.922895] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.518 [2024-10-25 17:51:46.922916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.518 [2024-10-25 17:51:46.922933] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:28.518 [2024-10-25 17:51:46.922953] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.518 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.519 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.519 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.779 [2024-10-25 17:51:46.968992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.779 BaseBdev1 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.779 17:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.779 [ 00:10:28.779 { 00:10:28.779 "name": "BaseBdev1", 00:10:28.779 "aliases": [ 00:10:28.779 "39936826-d0a4-43b6-8607-c66dd32a7f05" 00:10:28.779 ], 00:10:28.779 "product_name": "Malloc disk", 00:10:28.779 "block_size": 512, 00:10:28.779 "num_blocks": 65536, 00:10:28.779 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:28.779 "assigned_rate_limits": { 00:10:28.779 "rw_ios_per_sec": 0, 00:10:28.779 "rw_mbytes_per_sec": 0, 00:10:28.779 "r_mbytes_per_sec": 0, 00:10:28.779 "w_mbytes_per_sec": 0 00:10:28.779 }, 00:10:28.779 "claimed": true, 00:10:28.779 "claim_type": "exclusive_write", 00:10:28.779 "zoned": false, 00:10:28.779 "supported_io_types": { 00:10:28.779 "read": true, 00:10:28.779 "write": true, 00:10:28.779 "unmap": true, 00:10:28.779 "flush": true, 00:10:28.779 "reset": true, 00:10:28.779 "nvme_admin": false, 00:10:28.779 "nvme_io": false, 00:10:28.779 "nvme_io_md": false, 00:10:28.779 "write_zeroes": true, 00:10:28.779 "zcopy": true, 00:10:28.779 "get_zone_info": false, 00:10:28.779 "zone_management": false, 00:10:28.779 "zone_append": false, 00:10:28.779 "compare": false, 00:10:28.779 "compare_and_write": false, 00:10:28.779 "abort": true, 00:10:28.779 "seek_hole": false, 00:10:28.779 "seek_data": false, 00:10:28.779 "copy": true, 00:10:28.779 "nvme_iov_md": false 00:10:28.779 }, 00:10:28.779 "memory_domains": [ 00:10:28.779 { 00:10:28.779 "dma_device_id": "system", 00:10:28.779 "dma_device_type": 1 00:10:28.779 }, 00:10:28.780 { 00:10:28.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.780 "dma_device_type": 2 00:10:28.780 } 00:10:28.780 ], 00:10:28.780 "driver_specific": {} 00:10:28.780 } 00:10:28.780 ] 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.780 "name": "Existed_Raid", 
00:10:28.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.780 "strip_size_kb": 64, 00:10:28.780 "state": "configuring", 00:10:28.780 "raid_level": "concat", 00:10:28.780 "superblock": false, 00:10:28.780 "num_base_bdevs": 4, 00:10:28.780 "num_base_bdevs_discovered": 1, 00:10:28.780 "num_base_bdevs_operational": 4, 00:10:28.780 "base_bdevs_list": [ 00:10:28.780 { 00:10:28.780 "name": "BaseBdev1", 00:10:28.780 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:28.780 "is_configured": true, 00:10:28.780 "data_offset": 0, 00:10:28.780 "data_size": 65536 00:10:28.780 }, 00:10:28.780 { 00:10:28.780 "name": "BaseBdev2", 00:10:28.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.780 "is_configured": false, 00:10:28.780 "data_offset": 0, 00:10:28.780 "data_size": 0 00:10:28.780 }, 00:10:28.780 { 00:10:28.780 "name": "BaseBdev3", 00:10:28.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.780 "is_configured": false, 00:10:28.780 "data_offset": 0, 00:10:28.780 "data_size": 0 00:10:28.780 }, 00:10:28.780 { 00:10:28.780 "name": "BaseBdev4", 00:10:28.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.780 "is_configured": false, 00:10:28.780 "data_offset": 0, 00:10:28.780 "data_size": 0 00:10:28.780 } 00:10:28.780 ] 00:10:28.780 }' 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.780 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 [2024-10-25 17:51:47.448260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.040 [2024-10-25 17:51:47.448375] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.040 [2024-10-25 17:51:47.456296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.040 [2024-10-25 17:51:47.458255] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.040 [2024-10-25 17:51:47.458297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.040 [2024-10-25 17:51:47.458307] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.040 [2024-10-25 17:51:47.458318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.040 [2024-10-25 17:51:47.458324] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.040 [2024-10-25 17:51:47.458333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.040 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.300 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.300 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.300 "name": "Existed_Raid", 00:10:29.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.300 "strip_size_kb": 64, 00:10:29.300 "state": "configuring", 00:10:29.300 "raid_level": "concat", 00:10:29.300 "superblock": false, 00:10:29.300 "num_base_bdevs": 4, 00:10:29.300 
"num_base_bdevs_discovered": 1, 00:10:29.300 "num_base_bdevs_operational": 4, 00:10:29.300 "base_bdevs_list": [ 00:10:29.300 { 00:10:29.300 "name": "BaseBdev1", 00:10:29.300 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:29.300 "is_configured": true, 00:10:29.300 "data_offset": 0, 00:10:29.300 "data_size": 65536 00:10:29.300 }, 00:10:29.300 { 00:10:29.300 "name": "BaseBdev2", 00:10:29.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.300 "is_configured": false, 00:10:29.300 "data_offset": 0, 00:10:29.300 "data_size": 0 00:10:29.300 }, 00:10:29.300 { 00:10:29.300 "name": "BaseBdev3", 00:10:29.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.300 "is_configured": false, 00:10:29.300 "data_offset": 0, 00:10:29.300 "data_size": 0 00:10:29.300 }, 00:10:29.300 { 00:10:29.300 "name": "BaseBdev4", 00:10:29.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.300 "is_configured": false, 00:10:29.300 "data_offset": 0, 00:10:29.300 "data_size": 0 00:10:29.300 } 00:10:29.300 ] 00:10:29.300 }' 00:10:29.300 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.300 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.560 [2024-10-25 17:51:47.916302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.560 BaseBdev2 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:29.560 17:51:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.560 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.560 [ 00:10:29.560 { 00:10:29.560 "name": "BaseBdev2", 00:10:29.561 "aliases": [ 00:10:29.561 "15a56461-2962-4809-85c4-c0c9f348a2ab" 00:10:29.561 ], 00:10:29.561 "product_name": "Malloc disk", 00:10:29.561 "block_size": 512, 00:10:29.561 "num_blocks": 65536, 00:10:29.561 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:29.561 "assigned_rate_limits": { 00:10:29.561 "rw_ios_per_sec": 0, 00:10:29.561 "rw_mbytes_per_sec": 0, 00:10:29.561 "r_mbytes_per_sec": 0, 00:10:29.561 "w_mbytes_per_sec": 0 00:10:29.561 }, 00:10:29.561 "claimed": true, 00:10:29.561 "claim_type": "exclusive_write", 00:10:29.561 "zoned": false, 00:10:29.561 "supported_io_types": { 
00:10:29.561 "read": true, 00:10:29.561 "write": true, 00:10:29.561 "unmap": true, 00:10:29.561 "flush": true, 00:10:29.561 "reset": true, 00:10:29.561 "nvme_admin": false, 00:10:29.561 "nvme_io": false, 00:10:29.561 "nvme_io_md": false, 00:10:29.561 "write_zeroes": true, 00:10:29.561 "zcopy": true, 00:10:29.561 "get_zone_info": false, 00:10:29.561 "zone_management": false, 00:10:29.561 "zone_append": false, 00:10:29.561 "compare": false, 00:10:29.561 "compare_and_write": false, 00:10:29.561 "abort": true, 00:10:29.561 "seek_hole": false, 00:10:29.561 "seek_data": false, 00:10:29.561 "copy": true, 00:10:29.561 "nvme_iov_md": false 00:10:29.561 }, 00:10:29.561 "memory_domains": [ 00:10:29.561 { 00:10:29.561 "dma_device_id": "system", 00:10:29.561 "dma_device_type": 1 00:10:29.561 }, 00:10:29.561 { 00:10:29.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.561 "dma_device_type": 2 00:10:29.561 } 00:10:29.561 ], 00:10:29.561 "driver_specific": {} 00:10:29.561 } 00:10:29.561 ] 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.561 17:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.820 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.820 "name": "Existed_Raid", 00:10:29.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.820 "strip_size_kb": 64, 00:10:29.820 "state": "configuring", 00:10:29.820 "raid_level": "concat", 00:10:29.820 "superblock": false, 00:10:29.820 "num_base_bdevs": 4, 00:10:29.820 "num_base_bdevs_discovered": 2, 00:10:29.820 "num_base_bdevs_operational": 4, 00:10:29.820 "base_bdevs_list": [ 00:10:29.820 { 00:10:29.820 "name": "BaseBdev1", 00:10:29.820 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:29.820 "is_configured": true, 00:10:29.820 "data_offset": 0, 00:10:29.820 "data_size": 65536 00:10:29.820 }, 00:10:29.820 { 00:10:29.820 "name": "BaseBdev2", 00:10:29.820 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:29.820 
"is_configured": true, 00:10:29.820 "data_offset": 0, 00:10:29.820 "data_size": 65536 00:10:29.820 }, 00:10:29.820 { 00:10:29.820 "name": "BaseBdev3", 00:10:29.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.820 "is_configured": false, 00:10:29.820 "data_offset": 0, 00:10:29.820 "data_size": 0 00:10:29.820 }, 00:10:29.820 { 00:10:29.820 "name": "BaseBdev4", 00:10:29.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.820 "is_configured": false, 00:10:29.820 "data_offset": 0, 00:10:29.820 "data_size": 0 00:10:29.820 } 00:10:29.820 ] 00:10:29.820 }' 00:10:29.820 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.820 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 [2024-10-25 17:51:48.418221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.080 BaseBdev3 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.080 [ 00:10:30.080 { 00:10:30.080 "name": "BaseBdev3", 00:10:30.080 "aliases": [ 00:10:30.080 "8f2dd077-f264-4ad4-8034-8869a27d4019" 00:10:30.080 ], 00:10:30.080 "product_name": "Malloc disk", 00:10:30.080 "block_size": 512, 00:10:30.080 "num_blocks": 65536, 00:10:30.080 "uuid": "8f2dd077-f264-4ad4-8034-8869a27d4019", 00:10:30.080 "assigned_rate_limits": { 00:10:30.080 "rw_ios_per_sec": 0, 00:10:30.080 "rw_mbytes_per_sec": 0, 00:10:30.080 "r_mbytes_per_sec": 0, 00:10:30.080 "w_mbytes_per_sec": 0 00:10:30.080 }, 00:10:30.080 "claimed": true, 00:10:30.080 "claim_type": "exclusive_write", 00:10:30.080 "zoned": false, 00:10:30.080 "supported_io_types": { 00:10:30.080 "read": true, 00:10:30.080 "write": true, 00:10:30.080 "unmap": true, 00:10:30.080 "flush": true, 00:10:30.080 "reset": true, 00:10:30.080 "nvme_admin": false, 00:10:30.080 "nvme_io": false, 00:10:30.080 "nvme_io_md": false, 00:10:30.080 "write_zeroes": true, 00:10:30.080 "zcopy": true, 00:10:30.080 "get_zone_info": false, 00:10:30.080 "zone_management": false, 00:10:30.080 "zone_append": false, 00:10:30.080 "compare": false, 00:10:30.080 "compare_and_write": false, 
00:10:30.080 "abort": true, 00:10:30.080 "seek_hole": false, 00:10:30.080 "seek_data": false, 00:10:30.080 "copy": true, 00:10:30.080 "nvme_iov_md": false 00:10:30.080 }, 00:10:30.080 "memory_domains": [ 00:10:30.080 { 00:10:30.080 "dma_device_id": "system", 00:10:30.080 "dma_device_type": 1 00:10:30.080 }, 00:10:30.080 { 00:10:30.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.080 "dma_device_type": 2 00:10:30.080 } 00:10:30.080 ], 00:10:30.080 "driver_specific": {} 00:10:30.080 } 00:10:30.080 ] 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.080 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.081 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.081 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.081 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.339 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.339 "name": "Existed_Raid", 00:10:30.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.339 "strip_size_kb": 64, 00:10:30.339 "state": "configuring", 00:10:30.339 "raid_level": "concat", 00:10:30.339 "superblock": false, 00:10:30.339 "num_base_bdevs": 4, 00:10:30.339 "num_base_bdevs_discovered": 3, 00:10:30.339 "num_base_bdevs_operational": 4, 00:10:30.339 "base_bdevs_list": [ 00:10:30.339 { 00:10:30.339 "name": "BaseBdev1", 00:10:30.339 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:30.339 "is_configured": true, 00:10:30.339 "data_offset": 0, 00:10:30.339 "data_size": 65536 00:10:30.339 }, 00:10:30.339 { 00:10:30.339 "name": "BaseBdev2", 00:10:30.339 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:30.339 "is_configured": true, 00:10:30.339 "data_offset": 0, 00:10:30.339 "data_size": 65536 00:10:30.339 }, 00:10:30.339 { 00:10:30.339 "name": "BaseBdev3", 00:10:30.339 "uuid": "8f2dd077-f264-4ad4-8034-8869a27d4019", 00:10:30.339 "is_configured": true, 00:10:30.339 "data_offset": 0, 00:10:30.339 "data_size": 65536 00:10:30.339 }, 00:10:30.339 { 00:10:30.339 "name": "BaseBdev4", 00:10:30.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.339 "is_configured": false, 
00:10:30.339 "data_offset": 0, 00:10:30.339 "data_size": 0 00:10:30.339 } 00:10:30.339 ] 00:10:30.339 }' 00:10:30.339 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.339 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.599 [2024-10-25 17:51:48.987512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.599 [2024-10-25 17:51:48.987636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:30.599 [2024-10-25 17:51:48.987662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:30.599 [2024-10-25 17:51:48.988031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:30.599 [2024-10-25 17:51:48.988263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:30.599 [2024-10-25 17:51:48.988317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:30.599 [2024-10-25 17:51:48.988659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.599 BaseBdev4 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.599 17:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.599 [ 00:10:30.599 { 00:10:30.599 "name": "BaseBdev4", 00:10:30.599 "aliases": [ 00:10:30.599 "5baf4735-41cd-467f-945d-bd16d279b47f" 00:10:30.599 ], 00:10:30.599 "product_name": "Malloc disk", 00:10:30.599 "block_size": 512, 00:10:30.599 "num_blocks": 65536, 00:10:30.599 "uuid": "5baf4735-41cd-467f-945d-bd16d279b47f", 00:10:30.599 "assigned_rate_limits": { 00:10:30.599 "rw_ios_per_sec": 0, 00:10:30.599 "rw_mbytes_per_sec": 0, 00:10:30.599 "r_mbytes_per_sec": 0, 00:10:30.599 "w_mbytes_per_sec": 0 00:10:30.599 }, 00:10:30.599 "claimed": true, 00:10:30.599 "claim_type": "exclusive_write", 00:10:30.599 "zoned": false, 00:10:30.599 "supported_io_types": { 00:10:30.599 "read": true, 00:10:30.599 "write": true, 00:10:30.599 "unmap": true, 00:10:30.599 "flush": true, 00:10:30.599 "reset": true, 00:10:30.599 
"nvme_admin": false, 00:10:30.599 "nvme_io": false, 00:10:30.599 "nvme_io_md": false, 00:10:30.599 "write_zeroes": true, 00:10:30.599 "zcopy": true, 00:10:30.599 "get_zone_info": false, 00:10:30.599 "zone_management": false, 00:10:30.599 "zone_append": false, 00:10:30.599 "compare": false, 00:10:30.599 "compare_and_write": false, 00:10:30.599 "abort": true, 00:10:30.599 "seek_hole": false, 00:10:30.599 "seek_data": false, 00:10:30.599 "copy": true, 00:10:30.599 "nvme_iov_md": false 00:10:30.599 }, 00:10:30.599 "memory_domains": [ 00:10:30.599 { 00:10:30.599 "dma_device_id": "system", 00:10:30.599 "dma_device_type": 1 00:10:30.599 }, 00:10:30.599 { 00:10:30.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.599 "dma_device_type": 2 00:10:30.599 } 00:10:30.599 ], 00:10:30.599 "driver_specific": {} 00:10:30.599 } 00:10:30.599 ] 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.599 
17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.599 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.859 "name": "Existed_Raid", 00:10:30.859 "uuid": "f5c2b0d1-8b3e-47f5-83ec-feebfb5d9b14", 00:10:30.859 "strip_size_kb": 64, 00:10:30.859 "state": "online", 00:10:30.859 "raid_level": "concat", 00:10:30.859 "superblock": false, 00:10:30.859 "num_base_bdevs": 4, 00:10:30.859 "num_base_bdevs_discovered": 4, 00:10:30.859 "num_base_bdevs_operational": 4, 00:10:30.859 "base_bdevs_list": [ 00:10:30.859 { 00:10:30.859 "name": "BaseBdev1", 00:10:30.859 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:30.859 "is_configured": true, 00:10:30.859 "data_offset": 0, 00:10:30.859 "data_size": 65536 00:10:30.859 }, 00:10:30.859 { 00:10:30.859 "name": "BaseBdev2", 00:10:30.859 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:30.859 "is_configured": true, 00:10:30.859 "data_offset": 0, 00:10:30.859 "data_size": 65536 00:10:30.859 }, 00:10:30.859 { 00:10:30.859 "name": "BaseBdev3", 
00:10:30.859 "uuid": "8f2dd077-f264-4ad4-8034-8869a27d4019", 00:10:30.859 "is_configured": true, 00:10:30.859 "data_offset": 0, 00:10:30.859 "data_size": 65536 00:10:30.859 }, 00:10:30.859 { 00:10:30.859 "name": "BaseBdev4", 00:10:30.859 "uuid": "5baf4735-41cd-467f-945d-bd16d279b47f", 00:10:30.859 "is_configured": true, 00:10:30.859 "data_offset": 0, 00:10:30.859 "data_size": 65536 00:10:30.859 } 00:10:30.859 ] 00:10:30.859 }' 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.859 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.117 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.117 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.117 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.118 [2024-10-25 17:51:49.487126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.118 
17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.118 "name": "Existed_Raid", 00:10:31.118 "aliases": [ 00:10:31.118 "f5c2b0d1-8b3e-47f5-83ec-feebfb5d9b14" 00:10:31.118 ], 00:10:31.118 "product_name": "Raid Volume", 00:10:31.118 "block_size": 512, 00:10:31.118 "num_blocks": 262144, 00:10:31.118 "uuid": "f5c2b0d1-8b3e-47f5-83ec-feebfb5d9b14", 00:10:31.118 "assigned_rate_limits": { 00:10:31.118 "rw_ios_per_sec": 0, 00:10:31.118 "rw_mbytes_per_sec": 0, 00:10:31.118 "r_mbytes_per_sec": 0, 00:10:31.118 "w_mbytes_per_sec": 0 00:10:31.118 }, 00:10:31.118 "claimed": false, 00:10:31.118 "zoned": false, 00:10:31.118 "supported_io_types": { 00:10:31.118 "read": true, 00:10:31.118 "write": true, 00:10:31.118 "unmap": true, 00:10:31.118 "flush": true, 00:10:31.118 "reset": true, 00:10:31.118 "nvme_admin": false, 00:10:31.118 "nvme_io": false, 00:10:31.118 "nvme_io_md": false, 00:10:31.118 "write_zeroes": true, 00:10:31.118 "zcopy": false, 00:10:31.118 "get_zone_info": false, 00:10:31.118 "zone_management": false, 00:10:31.118 "zone_append": false, 00:10:31.118 "compare": false, 00:10:31.118 "compare_and_write": false, 00:10:31.118 "abort": false, 00:10:31.118 "seek_hole": false, 00:10:31.118 "seek_data": false, 00:10:31.118 "copy": false, 00:10:31.118 "nvme_iov_md": false 00:10:31.118 }, 00:10:31.118 "memory_domains": [ 00:10:31.118 { 00:10:31.118 "dma_device_id": "system", 00:10:31.118 "dma_device_type": 1 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.118 "dma_device_type": 2 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "system", 00:10:31.118 "dma_device_type": 1 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.118 "dma_device_type": 2 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "system", 00:10:31.118 "dma_device_type": 1 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:31.118 "dma_device_type": 2 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "system", 00:10:31.118 "dma_device_type": 1 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.118 "dma_device_type": 2 00:10:31.118 } 00:10:31.118 ], 00:10:31.118 "driver_specific": { 00:10:31.118 "raid": { 00:10:31.118 "uuid": "f5c2b0d1-8b3e-47f5-83ec-feebfb5d9b14", 00:10:31.118 "strip_size_kb": 64, 00:10:31.118 "state": "online", 00:10:31.118 "raid_level": "concat", 00:10:31.118 "superblock": false, 00:10:31.118 "num_base_bdevs": 4, 00:10:31.118 "num_base_bdevs_discovered": 4, 00:10:31.118 "num_base_bdevs_operational": 4, 00:10:31.118 "base_bdevs_list": [ 00:10:31.118 { 00:10:31.118 "name": "BaseBdev1", 00:10:31.118 "uuid": "39936826-d0a4-43b6-8607-c66dd32a7f05", 00:10:31.118 "is_configured": true, 00:10:31.118 "data_offset": 0, 00:10:31.118 "data_size": 65536 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "name": "BaseBdev2", 00:10:31.118 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:31.118 "is_configured": true, 00:10:31.118 "data_offset": 0, 00:10:31.118 "data_size": 65536 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "name": "BaseBdev3", 00:10:31.118 "uuid": "8f2dd077-f264-4ad4-8034-8869a27d4019", 00:10:31.118 "is_configured": true, 00:10:31.118 "data_offset": 0, 00:10:31.118 "data_size": 65536 00:10:31.118 }, 00:10:31.118 { 00:10:31.118 "name": "BaseBdev4", 00:10:31.118 "uuid": "5baf4735-41cd-467f-945d-bd16d279b47f", 00:10:31.118 "is_configured": true, 00:10:31.118 "data_offset": 0, 00:10:31.118 "data_size": 65536 00:10:31.118 } 00:10:31.118 ] 00:10:31.118 } 00:10:31.118 } 00:10:31.118 }' 00:10:31.118 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:31.398 BaseBdev2 
00:10:31.398 BaseBdev3 00:10:31.398 BaseBdev4' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.398 17:51:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.398 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.657 17:51:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.657 [2024-10-25 17:51:49.838179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.657 [2024-10-25 17:51:49.838210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.657 [2024-10-25 17:51:49.838261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.657 "name": "Existed_Raid", 00:10:31.657 "uuid": "f5c2b0d1-8b3e-47f5-83ec-feebfb5d9b14", 00:10:31.657 "strip_size_kb": 64, 00:10:31.657 "state": "offline", 00:10:31.657 "raid_level": "concat", 00:10:31.657 "superblock": false, 00:10:31.657 "num_base_bdevs": 4, 00:10:31.657 "num_base_bdevs_discovered": 3, 00:10:31.657 "num_base_bdevs_operational": 3, 00:10:31.657 "base_bdevs_list": [ 00:10:31.657 { 00:10:31.657 "name": null, 00:10:31.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.657 "is_configured": false, 00:10:31.657 "data_offset": 0, 00:10:31.657 "data_size": 65536 00:10:31.657 }, 00:10:31.657 { 00:10:31.657 "name": "BaseBdev2", 00:10:31.657 "uuid": "15a56461-2962-4809-85c4-c0c9f348a2ab", 00:10:31.657 "is_configured": 
true, 00:10:31.657 "data_offset": 0, 00:10:31.657 "data_size": 65536 00:10:31.657 }, 00:10:31.657 { 00:10:31.657 "name": "BaseBdev3", 00:10:31.657 "uuid": "8f2dd077-f264-4ad4-8034-8869a27d4019", 00:10:31.657 "is_configured": true, 00:10:31.657 "data_offset": 0, 00:10:31.657 "data_size": 65536 00:10:31.657 }, 00:10:31.657 { 00:10:31.657 "name": "BaseBdev4", 00:10:31.657 "uuid": "5baf4735-41cd-467f-945d-bd16d279b47f", 00:10:31.657 "is_configured": true, 00:10:31.657 "data_offset": 0, 00:10:31.657 "data_size": 65536 00:10:31.657 } 00:10:31.657 ] 00:10:31.657 }' 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.657 17:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.223 [2024-10-25 17:51:50.464112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.223 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.223 [2024-10-25 17:51:50.622349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.481 17:51:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.481 [2024-10-25 17:51:50.783344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:32.481 [2024-10-25 17:51:50.783459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.481 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 BaseBdev2 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.741 17:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 [ 00:10:32.741 { 00:10:32.741 "name": "BaseBdev2", 00:10:32.741 "aliases": [ 00:10:32.741 "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed" 00:10:32.741 ], 00:10:32.741 "product_name": "Malloc disk", 00:10:32.741 "block_size": 512, 00:10:32.741 "num_blocks": 65536, 00:10:32.741 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:32.741 "assigned_rate_limits": { 00:10:32.741 "rw_ios_per_sec": 0, 00:10:32.741 "rw_mbytes_per_sec": 0, 00:10:32.741 "r_mbytes_per_sec": 0, 00:10:32.741 "w_mbytes_per_sec": 0 00:10:32.741 }, 00:10:32.741 "claimed": false, 00:10:32.741 "zoned": false, 00:10:32.741 "supported_io_types": { 00:10:32.741 "read": true, 00:10:32.741 "write": true, 00:10:32.741 "unmap": true, 00:10:32.741 "flush": true, 00:10:32.741 "reset": true, 00:10:32.741 "nvme_admin": false, 00:10:32.741 "nvme_io": false, 00:10:32.741 "nvme_io_md": false, 00:10:32.741 "write_zeroes": true, 00:10:32.741 "zcopy": true, 00:10:32.741 "get_zone_info": false, 00:10:32.741 "zone_management": false, 00:10:32.741 "zone_append": false, 00:10:32.741 "compare": false, 00:10:32.741 "compare_and_write": false, 00:10:32.741 "abort": true, 00:10:32.741 "seek_hole": false, 00:10:32.741 
"seek_data": false, 00:10:32.741 "copy": true, 00:10:32.741 "nvme_iov_md": false 00:10:32.741 }, 00:10:32.741 "memory_domains": [ 00:10:32.741 { 00:10:32.741 "dma_device_id": "system", 00:10:32.741 "dma_device_type": 1 00:10:32.741 }, 00:10:32.741 { 00:10:32.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.741 "dma_device_type": 2 00:10:32.741 } 00:10:32.741 ], 00:10:32.741 "driver_specific": {} 00:10:32.741 } 00:10:32.741 ] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 BaseBdev3 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 [ 00:10:32.741 { 00:10:32.741 "name": "BaseBdev3", 00:10:32.741 "aliases": [ 00:10:32.741 "1945d52a-5ed4-4c79-a476-c659aa51cd85" 00:10:32.741 ], 00:10:32.741 "product_name": "Malloc disk", 00:10:32.741 "block_size": 512, 00:10:32.741 "num_blocks": 65536, 00:10:32.741 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:32.741 "assigned_rate_limits": { 00:10:32.741 "rw_ios_per_sec": 0, 00:10:32.741 "rw_mbytes_per_sec": 0, 00:10:32.741 "r_mbytes_per_sec": 0, 00:10:32.741 "w_mbytes_per_sec": 0 00:10:32.741 }, 00:10:32.741 "claimed": false, 00:10:32.741 "zoned": false, 00:10:32.741 "supported_io_types": { 00:10:32.741 "read": true, 00:10:32.741 "write": true, 00:10:32.741 "unmap": true, 00:10:32.741 "flush": true, 00:10:32.741 "reset": true, 00:10:32.741 "nvme_admin": false, 00:10:32.741 "nvme_io": false, 00:10:32.741 "nvme_io_md": false, 00:10:32.741 "write_zeroes": true, 00:10:32.741 "zcopy": true, 00:10:32.741 "get_zone_info": false, 00:10:32.741 "zone_management": false, 00:10:32.741 "zone_append": false, 00:10:32.741 "compare": false, 00:10:32.741 "compare_and_write": false, 00:10:32.741 "abort": true, 00:10:32.741 "seek_hole": false, 00:10:32.741 "seek_data": false, 
00:10:32.741 "copy": true, 00:10:32.741 "nvme_iov_md": false 00:10:32.741 }, 00:10:32.741 "memory_domains": [ 00:10:32.741 { 00:10:32.741 "dma_device_id": "system", 00:10:32.741 "dma_device_type": 1 00:10:32.741 }, 00:10:32.741 { 00:10:32.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.741 "dma_device_type": 2 00:10:32.741 } 00:10:32.741 ], 00:10:32.741 "driver_specific": {} 00:10:32.741 } 00:10:32.741 ] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 BaseBdev4 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.741 
17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.741 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.001 [ 00:10:33.001 { 00:10:33.001 "name": "BaseBdev4", 00:10:33.001 "aliases": [ 00:10:33.001 "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a" 00:10:33.001 ], 00:10:33.001 "product_name": "Malloc disk", 00:10:33.001 "block_size": 512, 00:10:33.001 "num_blocks": 65536, 00:10:33.001 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:33.001 "assigned_rate_limits": { 00:10:33.001 "rw_ios_per_sec": 0, 00:10:33.001 "rw_mbytes_per_sec": 0, 00:10:33.001 "r_mbytes_per_sec": 0, 00:10:33.001 "w_mbytes_per_sec": 0 00:10:33.001 }, 00:10:33.001 "claimed": false, 00:10:33.001 "zoned": false, 00:10:33.001 "supported_io_types": { 00:10:33.001 "read": true, 00:10:33.001 "write": true, 00:10:33.001 "unmap": true, 00:10:33.001 "flush": true, 00:10:33.001 "reset": true, 00:10:33.001 "nvme_admin": false, 00:10:33.001 "nvme_io": false, 00:10:33.001 "nvme_io_md": false, 00:10:33.001 "write_zeroes": true, 00:10:33.001 "zcopy": true, 00:10:33.001 "get_zone_info": false, 00:10:33.001 "zone_management": false, 00:10:33.001 "zone_append": false, 00:10:33.001 "compare": false, 00:10:33.001 "compare_and_write": false, 00:10:33.001 "abort": true, 00:10:33.001 "seek_hole": false, 00:10:33.001 "seek_data": false, 00:10:33.001 
"copy": true, 00:10:33.001 "nvme_iov_md": false 00:10:33.001 }, 00:10:33.001 "memory_domains": [ 00:10:33.001 { 00:10:33.001 "dma_device_id": "system", 00:10:33.001 "dma_device_type": 1 00:10:33.001 }, 00:10:33.001 { 00:10:33.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.002 "dma_device_type": 2 00:10:33.002 } 00:10:33.002 ], 00:10:33.002 "driver_specific": {} 00:10:33.002 } 00:10:33.002 ] 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.002 [2024-10-25 17:51:51.199983] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.002 [2024-10-25 17:51:51.200029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.002 [2024-10-25 17:51:51.200054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.002 [2024-10-25 17:51:51.202094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.002 [2024-10-25 17:51:51.202158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.002 17:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.002 "name": "Existed_Raid", 00:10:33.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.002 "strip_size_kb": 64, 00:10:33.002 "state": "configuring", 00:10:33.002 
"raid_level": "concat", 00:10:33.002 "superblock": false, 00:10:33.002 "num_base_bdevs": 4, 00:10:33.002 "num_base_bdevs_discovered": 3, 00:10:33.002 "num_base_bdevs_operational": 4, 00:10:33.002 "base_bdevs_list": [ 00:10:33.002 { 00:10:33.002 "name": "BaseBdev1", 00:10:33.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.002 "is_configured": false, 00:10:33.002 "data_offset": 0, 00:10:33.002 "data_size": 0 00:10:33.002 }, 00:10:33.002 { 00:10:33.002 "name": "BaseBdev2", 00:10:33.002 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:33.002 "is_configured": true, 00:10:33.002 "data_offset": 0, 00:10:33.002 "data_size": 65536 00:10:33.002 }, 00:10:33.002 { 00:10:33.002 "name": "BaseBdev3", 00:10:33.002 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:33.002 "is_configured": true, 00:10:33.002 "data_offset": 0, 00:10:33.002 "data_size": 65536 00:10:33.002 }, 00:10:33.002 { 00:10:33.002 "name": "BaseBdev4", 00:10:33.002 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:33.002 "is_configured": true, 00:10:33.002 "data_offset": 0, 00:10:33.002 "data_size": 65536 00:10:33.002 } 00:10:33.002 ] 00:10:33.002 }' 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.002 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.261 [2024-10-25 17:51:51.615255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.261 "name": "Existed_Raid", 00:10:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.261 "strip_size_kb": 64, 00:10:33.261 "state": "configuring", 00:10:33.261 "raid_level": "concat", 00:10:33.261 "superblock": false, 
00:10:33.261 "num_base_bdevs": 4, 00:10:33.261 "num_base_bdevs_discovered": 2, 00:10:33.261 "num_base_bdevs_operational": 4, 00:10:33.261 "base_bdevs_list": [ 00:10:33.261 { 00:10:33.261 "name": "BaseBdev1", 00:10:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.261 "is_configured": false, 00:10:33.261 "data_offset": 0, 00:10:33.261 "data_size": 0 00:10:33.261 }, 00:10:33.261 { 00:10:33.261 "name": null, 00:10:33.261 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:33.261 "is_configured": false, 00:10:33.261 "data_offset": 0, 00:10:33.261 "data_size": 65536 00:10:33.261 }, 00:10:33.261 { 00:10:33.261 "name": "BaseBdev3", 00:10:33.261 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:33.261 "is_configured": true, 00:10:33.261 "data_offset": 0, 00:10:33.261 "data_size": 65536 00:10:33.261 }, 00:10:33.261 { 00:10:33.261 "name": "BaseBdev4", 00:10:33.261 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:33.261 "is_configured": true, 00:10:33.261 "data_offset": 0, 00:10:33.261 "data_size": 65536 00:10:33.261 } 00:10:33.261 ] 00:10:33.261 }' 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.261 17:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:33.831 17:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.831 [2024-10-25 17:51:52.128954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.831 BaseBdev1 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.831 [ 00:10:33.831 { 00:10:33.831 "name": "BaseBdev1", 00:10:33.831 "aliases": [ 00:10:33.831 "e0f5901d-f53d-4fea-ae6b-572655c8df10" 00:10:33.831 ], 00:10:33.831 "product_name": "Malloc disk", 00:10:33.831 "block_size": 512, 00:10:33.831 "num_blocks": 65536, 00:10:33.831 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:33.831 "assigned_rate_limits": { 00:10:33.831 "rw_ios_per_sec": 0, 00:10:33.831 "rw_mbytes_per_sec": 0, 00:10:33.831 "r_mbytes_per_sec": 0, 00:10:33.831 "w_mbytes_per_sec": 0 00:10:33.831 }, 00:10:33.831 "claimed": true, 00:10:33.831 "claim_type": "exclusive_write", 00:10:33.831 "zoned": false, 00:10:33.831 "supported_io_types": { 00:10:33.831 "read": true, 00:10:33.831 "write": true, 00:10:33.831 "unmap": true, 00:10:33.831 "flush": true, 00:10:33.831 "reset": true, 00:10:33.831 "nvme_admin": false, 00:10:33.831 "nvme_io": false, 00:10:33.831 "nvme_io_md": false, 00:10:33.831 "write_zeroes": true, 00:10:33.831 "zcopy": true, 00:10:33.831 "get_zone_info": false, 00:10:33.831 "zone_management": false, 00:10:33.831 "zone_append": false, 00:10:33.831 "compare": false, 00:10:33.831 "compare_and_write": false, 00:10:33.831 "abort": true, 00:10:33.831 "seek_hole": false, 00:10:33.831 "seek_data": false, 00:10:33.831 "copy": true, 00:10:33.831 "nvme_iov_md": false 00:10:33.831 }, 00:10:33.831 "memory_domains": [ 00:10:33.831 { 00:10:33.831 "dma_device_id": "system", 00:10:33.831 "dma_device_type": 1 00:10:33.831 }, 00:10:33.831 { 00:10:33.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.831 "dma_device_type": 2 00:10:33.831 } 00:10:33.831 ], 00:10:33.831 "driver_specific": {} 00:10:33.831 } 00:10:33.831 ] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.831 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.832 "name": "Existed_Raid", 00:10:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.832 "strip_size_kb": 64, 00:10:33.832 "state": "configuring", 00:10:33.832 "raid_level": "concat", 00:10:33.832 "superblock": false, 
00:10:33.832 "num_base_bdevs": 4, 00:10:33.832 "num_base_bdevs_discovered": 3, 00:10:33.832 "num_base_bdevs_operational": 4, 00:10:33.832 "base_bdevs_list": [ 00:10:33.832 { 00:10:33.832 "name": "BaseBdev1", 00:10:33.832 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:33.832 "is_configured": true, 00:10:33.832 "data_offset": 0, 00:10:33.832 "data_size": 65536 00:10:33.832 }, 00:10:33.832 { 00:10:33.832 "name": null, 00:10:33.832 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:33.832 "is_configured": false, 00:10:33.832 "data_offset": 0, 00:10:33.832 "data_size": 65536 00:10:33.832 }, 00:10:33.832 { 00:10:33.832 "name": "BaseBdev3", 00:10:33.832 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:33.832 "is_configured": true, 00:10:33.832 "data_offset": 0, 00:10:33.832 "data_size": 65536 00:10:33.832 }, 00:10:33.832 { 00:10:33.832 "name": "BaseBdev4", 00:10:33.832 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:33.832 "is_configured": true, 00:10:33.832 "data_offset": 0, 00:10:33.832 "data_size": 65536 00:10:33.832 } 00:10:33.832 ] 00:10:33.832 }' 00:10:33.832 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.832 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:34.401 17:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.401 [2024-10-25 17:51:52.668258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.401 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.401 "name": "Existed_Raid", 00:10:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.401 "strip_size_kb": 64, 00:10:34.401 "state": "configuring", 00:10:34.401 "raid_level": "concat", 00:10:34.401 "superblock": false, 00:10:34.401 "num_base_bdevs": 4, 00:10:34.401 "num_base_bdevs_discovered": 2, 00:10:34.401 "num_base_bdevs_operational": 4, 00:10:34.401 "base_bdevs_list": [ 00:10:34.401 { 00:10:34.401 "name": "BaseBdev1", 00:10:34.401 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:34.401 "is_configured": true, 00:10:34.401 "data_offset": 0, 00:10:34.402 "data_size": 65536 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "name": null, 00:10:34.402 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:34.402 "is_configured": false, 00:10:34.402 "data_offset": 0, 00:10:34.402 "data_size": 65536 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "name": null, 00:10:34.402 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:34.402 "is_configured": false, 00:10:34.402 "data_offset": 0, 00:10:34.402 "data_size": 65536 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "name": "BaseBdev4", 00:10:34.402 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:34.402 "is_configured": true, 00:10:34.402 "data_offset": 0, 00:10:34.402 "data_size": 65536 00:10:34.402 } 00:10:34.402 ] 00:10:34.402 }' 00:10:34.402 17:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.402 17:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.971 [2024-10-25 17:51:53.164077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.971 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.971 "name": "Existed_Raid", 00:10:34.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.971 "strip_size_kb": 64, 00:10:34.971 "state": "configuring", 00:10:34.971 "raid_level": "concat", 00:10:34.971 "superblock": false, 00:10:34.971 "num_base_bdevs": 4, 00:10:34.971 "num_base_bdevs_discovered": 3, 00:10:34.971 "num_base_bdevs_operational": 4, 00:10:34.971 "base_bdevs_list": [ 00:10:34.972 { 00:10:34.972 "name": "BaseBdev1", 00:10:34.972 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:34.972 "is_configured": true, 00:10:34.972 "data_offset": 0, 00:10:34.972 "data_size": 65536 00:10:34.972 }, 00:10:34.972 { 00:10:34.972 "name": null, 00:10:34.972 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:34.972 "is_configured": false, 00:10:34.972 "data_offset": 0, 00:10:34.972 "data_size": 65536 00:10:34.972 }, 00:10:34.972 { 00:10:34.972 "name": "BaseBdev3", 00:10:34.972 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:34.972 
"is_configured": true, 00:10:34.972 "data_offset": 0, 00:10:34.972 "data_size": 65536 00:10:34.972 }, 00:10:34.972 { 00:10:34.972 "name": "BaseBdev4", 00:10:34.972 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:34.972 "is_configured": true, 00:10:34.972 "data_offset": 0, 00:10:34.972 "data_size": 65536 00:10:34.972 } 00:10:34.972 ] 00:10:34.972 }' 00:10:34.972 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.972 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.232 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 [2024-10-25 17:51:53.599378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.492 "name": "Existed_Raid", 00:10:35.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.492 "strip_size_kb": 64, 00:10:35.492 "state": "configuring", 00:10:35.492 "raid_level": "concat", 00:10:35.492 "superblock": false, 00:10:35.492 "num_base_bdevs": 4, 00:10:35.492 "num_base_bdevs_discovered": 2, 00:10:35.492 "num_base_bdevs_operational": 4, 
00:10:35.492 "base_bdevs_list": [ 00:10:35.492 { 00:10:35.492 "name": null, 00:10:35.492 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:35.492 "is_configured": false, 00:10:35.492 "data_offset": 0, 00:10:35.492 "data_size": 65536 00:10:35.492 }, 00:10:35.492 { 00:10:35.492 "name": null, 00:10:35.492 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:35.492 "is_configured": false, 00:10:35.492 "data_offset": 0, 00:10:35.492 "data_size": 65536 00:10:35.492 }, 00:10:35.492 { 00:10:35.492 "name": "BaseBdev3", 00:10:35.492 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:35.492 "is_configured": true, 00:10:35.492 "data_offset": 0, 00:10:35.492 "data_size": 65536 00:10:35.492 }, 00:10:35.492 { 00:10:35.492 "name": "BaseBdev4", 00:10:35.492 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:35.492 "is_configured": true, 00:10:35.492 "data_offset": 0, 00:10:35.492 "data_size": 65536 00:10:35.492 } 00:10:35.492 ] 00:10:35.492 }' 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.492 17:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.752 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:35.753 17:51:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 [2024-10-25 17:51:54.138563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.753 17:51:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.753 "name": "Existed_Raid", 00:10:35.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.753 "strip_size_kb": 64, 00:10:35.753 "state": "configuring", 00:10:35.753 "raid_level": "concat", 00:10:35.753 "superblock": false, 00:10:35.753 "num_base_bdevs": 4, 00:10:35.753 "num_base_bdevs_discovered": 3, 00:10:35.753 "num_base_bdevs_operational": 4, 00:10:35.753 "base_bdevs_list": [ 00:10:35.753 { 00:10:35.753 "name": null, 00:10:35.753 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:35.753 "is_configured": false, 00:10:35.753 "data_offset": 0, 00:10:35.753 "data_size": 65536 00:10:35.753 }, 00:10:35.753 { 00:10:35.753 "name": "BaseBdev2", 00:10:35.753 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:35.753 "is_configured": true, 00:10:35.753 "data_offset": 0, 00:10:35.753 "data_size": 65536 00:10:35.753 }, 00:10:35.753 { 00:10:35.753 "name": "BaseBdev3", 00:10:35.753 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:35.753 "is_configured": true, 00:10:35.753 "data_offset": 0, 00:10:35.753 "data_size": 65536 00:10:35.753 }, 00:10:35.753 { 00:10:35.753 "name": "BaseBdev4", 00:10:35.753 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:35.753 "is_configured": true, 00:10:35.753 "data_offset": 0, 00:10:35.753 "data_size": 65536 00:10:35.753 } 00:10:35.753 ] 00:10:35.753 }' 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.753 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0f5901d-f53d-4fea-ae6b-572655c8df10 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 [2024-10-25 17:51:54.673347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:36.324 [2024-10-25 17:51:54.673406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:36.324 [2024-10-25 17:51:54.673413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:36.324 [2024-10-25 17:51:54.673656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:36.324 [2024-10-25 17:51:54.673819] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:36.324 [2024-10-25 17:51:54.673873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:36.324 [2024-10-25 17:51:54.674102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.324 NewBaseBdev 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 [ 00:10:36.324 { 
00:10:36.324 "name": "NewBaseBdev", 00:10:36.324 "aliases": [ 00:10:36.324 "e0f5901d-f53d-4fea-ae6b-572655c8df10" 00:10:36.324 ], 00:10:36.324 "product_name": "Malloc disk", 00:10:36.324 "block_size": 512, 00:10:36.324 "num_blocks": 65536, 00:10:36.324 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:36.324 "assigned_rate_limits": { 00:10:36.324 "rw_ios_per_sec": 0, 00:10:36.324 "rw_mbytes_per_sec": 0, 00:10:36.324 "r_mbytes_per_sec": 0, 00:10:36.324 "w_mbytes_per_sec": 0 00:10:36.324 }, 00:10:36.324 "claimed": true, 00:10:36.324 "claim_type": "exclusive_write", 00:10:36.324 "zoned": false, 00:10:36.324 "supported_io_types": { 00:10:36.324 "read": true, 00:10:36.324 "write": true, 00:10:36.324 "unmap": true, 00:10:36.324 "flush": true, 00:10:36.324 "reset": true, 00:10:36.324 "nvme_admin": false, 00:10:36.324 "nvme_io": false, 00:10:36.324 "nvme_io_md": false, 00:10:36.324 "write_zeroes": true, 00:10:36.324 "zcopy": true, 00:10:36.324 "get_zone_info": false, 00:10:36.324 "zone_management": false, 00:10:36.324 "zone_append": false, 00:10:36.324 "compare": false, 00:10:36.324 "compare_and_write": false, 00:10:36.324 "abort": true, 00:10:36.324 "seek_hole": false, 00:10:36.324 "seek_data": false, 00:10:36.324 "copy": true, 00:10:36.324 "nvme_iov_md": false 00:10:36.324 }, 00:10:36.324 "memory_domains": [ 00:10:36.324 { 00:10:36.324 "dma_device_id": "system", 00:10:36.324 "dma_device_type": 1 00:10:36.324 }, 00:10:36.324 { 00:10:36.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.324 "dma_device_type": 2 00:10:36.324 } 00:10:36.324 ], 00:10:36.324 "driver_specific": {} 00:10:36.324 } 00:10:36.324 ] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:36.324 
17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.324 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.584 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.584 "name": "Existed_Raid", 00:10:36.584 "uuid": "2190c532-1395-4568-9796-65a96dcd60e6", 00:10:36.584 "strip_size_kb": 64, 00:10:36.584 "state": "online", 00:10:36.584 "raid_level": "concat", 00:10:36.584 "superblock": false, 00:10:36.584 "num_base_bdevs": 4, 00:10:36.584 "num_base_bdevs_discovered": 4, 00:10:36.584 
"num_base_bdevs_operational": 4, 00:10:36.584 "base_bdevs_list": [ 00:10:36.584 { 00:10:36.584 "name": "NewBaseBdev", 00:10:36.584 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:36.584 "is_configured": true, 00:10:36.584 "data_offset": 0, 00:10:36.584 "data_size": 65536 00:10:36.584 }, 00:10:36.584 { 00:10:36.584 "name": "BaseBdev2", 00:10:36.584 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:36.584 "is_configured": true, 00:10:36.584 "data_offset": 0, 00:10:36.584 "data_size": 65536 00:10:36.584 }, 00:10:36.584 { 00:10:36.584 "name": "BaseBdev3", 00:10:36.584 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:36.584 "is_configured": true, 00:10:36.584 "data_offset": 0, 00:10:36.584 "data_size": 65536 00:10:36.584 }, 00:10:36.584 { 00:10:36.584 "name": "BaseBdev4", 00:10:36.584 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:36.584 "is_configured": true, 00:10:36.584 "data_offset": 0, 00:10:36.584 "data_size": 65536 00:10:36.584 } 00:10:36.584 ] 00:10:36.584 }' 00:10:36.584 17:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.584 17:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.844 [2024-10-25 17:51:55.121028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.844 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.844 "name": "Existed_Raid", 00:10:36.844 "aliases": [ 00:10:36.844 "2190c532-1395-4568-9796-65a96dcd60e6" 00:10:36.844 ], 00:10:36.844 "product_name": "Raid Volume", 00:10:36.844 "block_size": 512, 00:10:36.844 "num_blocks": 262144, 00:10:36.844 "uuid": "2190c532-1395-4568-9796-65a96dcd60e6", 00:10:36.844 "assigned_rate_limits": { 00:10:36.844 "rw_ios_per_sec": 0, 00:10:36.844 "rw_mbytes_per_sec": 0, 00:10:36.844 "r_mbytes_per_sec": 0, 00:10:36.844 "w_mbytes_per_sec": 0 00:10:36.844 }, 00:10:36.844 "claimed": false, 00:10:36.844 "zoned": false, 00:10:36.844 "supported_io_types": { 00:10:36.844 "read": true, 00:10:36.844 "write": true, 00:10:36.844 "unmap": true, 00:10:36.844 "flush": true, 00:10:36.844 "reset": true, 00:10:36.844 "nvme_admin": false, 00:10:36.844 "nvme_io": false, 00:10:36.844 "nvme_io_md": false, 00:10:36.844 "write_zeroes": true, 00:10:36.844 "zcopy": false, 00:10:36.844 "get_zone_info": false, 00:10:36.844 "zone_management": false, 00:10:36.844 "zone_append": false, 00:10:36.844 "compare": false, 00:10:36.844 "compare_and_write": false, 00:10:36.844 "abort": false, 00:10:36.844 "seek_hole": false, 00:10:36.844 "seek_data": false, 00:10:36.844 "copy": false, 00:10:36.844 "nvme_iov_md": false 00:10:36.844 }, 00:10:36.844 "memory_domains": [ 00:10:36.844 { 00:10:36.844 "dma_device_id": "system", 
00:10:36.844 "dma_device_type": 1 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.844 "dma_device_type": 2 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "system", 00:10:36.844 "dma_device_type": 1 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.844 "dma_device_type": 2 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "system", 00:10:36.844 "dma_device_type": 1 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.844 "dma_device_type": 2 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "system", 00:10:36.844 "dma_device_type": 1 00:10:36.844 }, 00:10:36.844 { 00:10:36.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.844 "dma_device_type": 2 00:10:36.844 } 00:10:36.844 ], 00:10:36.844 "driver_specific": { 00:10:36.844 "raid": { 00:10:36.844 "uuid": "2190c532-1395-4568-9796-65a96dcd60e6", 00:10:36.844 "strip_size_kb": 64, 00:10:36.844 "state": "online", 00:10:36.844 "raid_level": "concat", 00:10:36.844 "superblock": false, 00:10:36.844 "num_base_bdevs": 4, 00:10:36.844 "num_base_bdevs_discovered": 4, 00:10:36.844 "num_base_bdevs_operational": 4, 00:10:36.844 "base_bdevs_list": [ 00:10:36.844 { 00:10:36.844 "name": "NewBaseBdev", 00:10:36.844 "uuid": "e0f5901d-f53d-4fea-ae6b-572655c8df10", 00:10:36.844 "is_configured": true, 00:10:36.845 "data_offset": 0, 00:10:36.845 "data_size": 65536 00:10:36.845 }, 00:10:36.845 { 00:10:36.845 "name": "BaseBdev2", 00:10:36.845 "uuid": "2604e8d4-7f2a-463d-a90f-cfbafc87c3ed", 00:10:36.845 "is_configured": true, 00:10:36.845 "data_offset": 0, 00:10:36.845 "data_size": 65536 00:10:36.845 }, 00:10:36.845 { 00:10:36.845 "name": "BaseBdev3", 00:10:36.845 "uuid": "1945d52a-5ed4-4c79-a476-c659aa51cd85", 00:10:36.845 "is_configured": true, 00:10:36.845 "data_offset": 0, 00:10:36.845 "data_size": 65536 00:10:36.845 }, 00:10:36.845 { 00:10:36.845 "name": "BaseBdev4", 
00:10:36.845 "uuid": "df00cb24-d98b-42ca-9f7d-d7a064ffcd3a", 00:10:36.845 "is_configured": true, 00:10:36.845 "data_offset": 0, 00:10:36.845 "data_size": 65536 00:10:36.845 } 00:10:36.845 ] 00:10:36.845 } 00:10:36.845 } 00:10:36.845 }' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:36.845 BaseBdev2 00:10:36.845 BaseBdev3 00:10:36.845 BaseBdev4' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.845 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:37.105 17:51:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.105 [2024-10-25 17:51:55.352219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.105 [2024-10-25 17:51:55.352253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.105 [2024-10-25 17:51:55.352330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.105 [2024-10-25 17:51:55.352403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.105 [2024-10-25 17:51:55.352420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71006 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 71006 ']' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71006 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71006 00:10:37.105 killing process with pid 71006 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71006' 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71006 00:10:37.105 [2024-10-25 17:51:55.398645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.105 17:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71006 00:10:37.365 [2024-10-25 17:51:55.790861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:38.745 00:10:38.745 real 0m11.440s 00:10:38.745 user 0m18.111s 00:10:38.745 sys 0m2.025s 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.745 ************************************ 00:10:38.745 END TEST raid_state_function_test 00:10:38.745 ************************************ 00:10:38.745 17:51:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:10:38.745 17:51:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:38.745 17:51:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.745 17:51:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.745 ************************************ 00:10:38.745 START TEST raid_state_function_test_sb 00:10:38.745 ************************************ 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.745 17:51:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.745 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71672 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.746 Process raid pid: 71672 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71672' 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71672 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71672 ']' 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.746 17:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.746 [2024-10-25 17:51:57.077549] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:38.746 [2024-10-25 17:51:57.077667] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.005 [2024-10-25 17:51:57.235459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.005 [2024-10-25 17:51:57.349686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.265 [2024-10-25 17:51:57.551553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.265 [2024-10-25 17:51:57.551595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.525 [2024-10-25 17:51:57.910132] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.525 [2024-10-25 17:51:57.910195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.525 [2024-10-25 17:51:57.910205] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.525 [2024-10-25 17:51:57.910214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.525 [2024-10-25 17:51:57.910225] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:39.525 [2024-10-25 17:51:57.910234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.525 [2024-10-25 17:51:57.910240] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.525 [2024-10-25 17:51:57.910248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.525 
17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.525 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.821 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.821 "name": "Existed_Raid", 00:10:39.821 "uuid": "b809267c-c3bc-4902-b4f6-06fac9b5fda6", 00:10:39.821 "strip_size_kb": 64, 00:10:39.821 "state": "configuring", 00:10:39.821 "raid_level": "concat", 00:10:39.821 "superblock": true, 00:10:39.821 "num_base_bdevs": 4, 00:10:39.821 "num_base_bdevs_discovered": 0, 00:10:39.821 "num_base_bdevs_operational": 4, 00:10:39.821 "base_bdevs_list": [ 00:10:39.821 { 00:10:39.821 "name": "BaseBdev1", 00:10:39.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.821 "is_configured": false, 00:10:39.821 "data_offset": 0, 00:10:39.821 "data_size": 0 00:10:39.821 }, 00:10:39.821 { 00:10:39.821 "name": "BaseBdev2", 00:10:39.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.821 "is_configured": false, 00:10:39.821 "data_offset": 0, 00:10:39.821 "data_size": 0 00:10:39.821 }, 00:10:39.821 { 00:10:39.821 "name": "BaseBdev3", 00:10:39.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.821 "is_configured": false, 00:10:39.821 "data_offset": 0, 00:10:39.821 "data_size": 0 00:10:39.821 }, 00:10:39.821 { 00:10:39.821 "name": "BaseBdev4", 00:10:39.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.821 "is_configured": false, 00:10:39.821 "data_offset": 0, 00:10:39.821 "data_size": 0 00:10:39.821 } 00:10:39.821 ] 00:10:39.821 }' 00:10:39.822 17:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.822 17:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 17:51:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 [2024-10-25 17:51:58.369280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.083 [2024-10-25 17:51:58.369322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 [2024-10-25 17:51:58.381259] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.083 [2024-10-25 17:51:58.381302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.083 [2024-10-25 17:51:58.381312] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.083 [2024-10-25 17:51:58.381321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.083 [2024-10-25 17:51:58.381327] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.083 [2024-10-25 17:51:58.381336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.083 [2024-10-25 17:51:58.381343] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:40.083 [2024-10-25 17:51:58.381351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 [2024-10-25 17:51:58.428720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.083 BaseBdev1 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 [ 00:10:40.083 { 00:10:40.083 "name": "BaseBdev1", 00:10:40.083 "aliases": [ 00:10:40.083 "657297f0-d087-4d03-8fc6-f3480fb66e3e" 00:10:40.083 ], 00:10:40.083 "product_name": "Malloc disk", 00:10:40.083 "block_size": 512, 00:10:40.083 "num_blocks": 65536, 00:10:40.083 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:40.083 "assigned_rate_limits": { 00:10:40.083 "rw_ios_per_sec": 0, 00:10:40.083 "rw_mbytes_per_sec": 0, 00:10:40.083 "r_mbytes_per_sec": 0, 00:10:40.083 "w_mbytes_per_sec": 0 00:10:40.083 }, 00:10:40.083 "claimed": true, 00:10:40.083 "claim_type": "exclusive_write", 00:10:40.083 "zoned": false, 00:10:40.083 "supported_io_types": { 00:10:40.083 "read": true, 00:10:40.083 "write": true, 00:10:40.083 "unmap": true, 00:10:40.083 "flush": true, 00:10:40.083 "reset": true, 00:10:40.083 "nvme_admin": false, 00:10:40.083 "nvme_io": false, 00:10:40.083 "nvme_io_md": false, 00:10:40.083 "write_zeroes": true, 00:10:40.083 "zcopy": true, 00:10:40.083 "get_zone_info": false, 00:10:40.083 "zone_management": false, 00:10:40.083 "zone_append": false, 00:10:40.083 "compare": false, 00:10:40.083 "compare_and_write": false, 00:10:40.083 "abort": true, 00:10:40.083 "seek_hole": false, 00:10:40.083 "seek_data": false, 00:10:40.083 "copy": true, 00:10:40.083 "nvme_iov_md": false 00:10:40.083 }, 00:10:40.083 "memory_domains": [ 00:10:40.083 { 00:10:40.083 "dma_device_id": "system", 00:10:40.083 "dma_device_type": 1 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.083 "dma_device_type": 2 00:10:40.083 } 
00:10:40.083 ], 00:10:40.083 "driver_specific": {} 00:10:40.083 } 00:10:40.083 ] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 17:51:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.083 "name": "Existed_Raid", 00:10:40.083 "uuid": "eef5b264-557a-4620-8928-7746f97f6669", 00:10:40.083 "strip_size_kb": 64, 00:10:40.083 "state": "configuring", 00:10:40.083 "raid_level": "concat", 00:10:40.083 "superblock": true, 00:10:40.083 "num_base_bdevs": 4, 00:10:40.083 "num_base_bdevs_discovered": 1, 00:10:40.083 "num_base_bdevs_operational": 4, 00:10:40.083 "base_bdevs_list": [ 00:10:40.083 { 00:10:40.083 "name": "BaseBdev1", 00:10:40.083 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:40.083 "is_configured": true, 00:10:40.083 "data_offset": 2048, 00:10:40.083 "data_size": 63488 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "BaseBdev2", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.083 "is_configured": false, 00:10:40.083 "data_offset": 0, 00:10:40.083 "data_size": 0 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "BaseBdev3", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.083 "is_configured": false, 00:10:40.083 "data_offset": 0, 00:10:40.083 "data_size": 0 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "BaseBdev4", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.083 "is_configured": false, 00:10:40.083 "data_offset": 0, 00:10:40.083 "data_size": 0 00:10:40.083 } 00:10:40.083 ] 00:10:40.083 }' 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.083 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.653 17:51:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.653 [2024-10-25 17:51:58.940166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.653 [2024-10-25 17:51:58.940227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.653 [2024-10-25 17:51:58.948234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.653 [2024-10-25 17:51:58.950269] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.653 [2024-10-25 17:51:58.950315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.653 [2024-10-25 17:51:58.950326] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.653 [2024-10-25 17:51:58.950338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.653 [2024-10-25 17:51:58.950346] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.653 [2024-10-25 17:51:58.950355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.653 17:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.653 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:40.653 "name": "Existed_Raid", 00:10:40.653 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:40.653 "strip_size_kb": 64, 00:10:40.653 "state": "configuring", 00:10:40.653 "raid_level": "concat", 00:10:40.653 "superblock": true, 00:10:40.653 "num_base_bdevs": 4, 00:10:40.653 "num_base_bdevs_discovered": 1, 00:10:40.653 "num_base_bdevs_operational": 4, 00:10:40.653 "base_bdevs_list": [ 00:10:40.653 { 00:10:40.653 "name": "BaseBdev1", 00:10:40.653 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:40.653 "is_configured": true, 00:10:40.653 "data_offset": 2048, 00:10:40.653 "data_size": 63488 00:10:40.653 }, 00:10:40.653 { 00:10:40.653 "name": "BaseBdev2", 00:10:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.653 "is_configured": false, 00:10:40.653 "data_offset": 0, 00:10:40.653 "data_size": 0 00:10:40.653 }, 00:10:40.653 { 00:10:40.653 "name": "BaseBdev3", 00:10:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.653 "is_configured": false, 00:10:40.653 "data_offset": 0, 00:10:40.653 "data_size": 0 00:10:40.653 }, 00:10:40.653 { 00:10:40.653 "name": "BaseBdev4", 00:10:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.653 "is_configured": false, 00:10:40.653 "data_offset": 0, 00:10:40.653 "data_size": 0 00:10:40.653 } 00:10:40.653 ] 00:10:40.653 }' 00:10:40.653 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.653 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.223 [2024-10-25 17:51:59.409644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:41.223 BaseBdev2 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.223 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.223 [ 00:10:41.223 { 00:10:41.223 "name": "BaseBdev2", 00:10:41.223 "aliases": [ 00:10:41.223 "60357065-1041-46db-a28f-2c7fa1a70c25" 00:10:41.223 ], 00:10:41.223 "product_name": "Malloc disk", 00:10:41.223 "block_size": 512, 00:10:41.223 "num_blocks": 65536, 00:10:41.223 "uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 
00:10:41.223 "assigned_rate_limits": { 00:10:41.223 "rw_ios_per_sec": 0, 00:10:41.223 "rw_mbytes_per_sec": 0, 00:10:41.223 "r_mbytes_per_sec": 0, 00:10:41.223 "w_mbytes_per_sec": 0 00:10:41.223 }, 00:10:41.223 "claimed": true, 00:10:41.223 "claim_type": "exclusive_write", 00:10:41.223 "zoned": false, 00:10:41.223 "supported_io_types": { 00:10:41.223 "read": true, 00:10:41.223 "write": true, 00:10:41.223 "unmap": true, 00:10:41.223 "flush": true, 00:10:41.223 "reset": true, 00:10:41.223 "nvme_admin": false, 00:10:41.223 "nvme_io": false, 00:10:41.223 "nvme_io_md": false, 00:10:41.223 "write_zeroes": true, 00:10:41.223 "zcopy": true, 00:10:41.223 "get_zone_info": false, 00:10:41.223 "zone_management": false, 00:10:41.223 "zone_append": false, 00:10:41.223 "compare": false, 00:10:41.223 "compare_and_write": false, 00:10:41.223 "abort": true, 00:10:41.223 "seek_hole": false, 00:10:41.223 "seek_data": false, 00:10:41.223 "copy": true, 00:10:41.223 "nvme_iov_md": false 00:10:41.223 }, 00:10:41.223 "memory_domains": [ 00:10:41.223 { 00:10:41.223 "dma_device_id": "system", 00:10:41.223 "dma_device_type": 1 00:10:41.223 }, 00:10:41.223 { 00:10:41.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.224 "dma_device_type": 2 00:10:41.224 } 00:10:41.224 ], 00:10:41.224 "driver_specific": {} 00:10:41.224 } 00:10:41.224 ] 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.224 "name": "Existed_Raid", 00:10:41.224 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:41.224 "strip_size_kb": 64, 00:10:41.224 "state": "configuring", 00:10:41.224 "raid_level": "concat", 00:10:41.224 "superblock": true, 00:10:41.224 "num_base_bdevs": 4, 00:10:41.224 "num_base_bdevs_discovered": 2, 00:10:41.224 
"num_base_bdevs_operational": 4, 00:10:41.224 "base_bdevs_list": [ 00:10:41.224 { 00:10:41.224 "name": "BaseBdev1", 00:10:41.224 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:41.224 "is_configured": true, 00:10:41.224 "data_offset": 2048, 00:10:41.224 "data_size": 63488 00:10:41.224 }, 00:10:41.224 { 00:10:41.224 "name": "BaseBdev2", 00:10:41.224 "uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 00:10:41.224 "is_configured": true, 00:10:41.224 "data_offset": 2048, 00:10:41.224 "data_size": 63488 00:10:41.224 }, 00:10:41.224 { 00:10:41.224 "name": "BaseBdev3", 00:10:41.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.224 "is_configured": false, 00:10:41.224 "data_offset": 0, 00:10:41.224 "data_size": 0 00:10:41.224 }, 00:10:41.224 { 00:10:41.224 "name": "BaseBdev4", 00:10:41.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.224 "is_configured": false, 00:10:41.224 "data_offset": 0, 00:10:41.224 "data_size": 0 00:10:41.224 } 00:10:41.224 ] 00:10:41.224 }' 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.224 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.483 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.483 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.483 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.744 [2024-10-25 17:51:59.954684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.744 BaseBdev3 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.744 [ 00:10:41.744 { 00:10:41.744 "name": "BaseBdev3", 00:10:41.744 "aliases": [ 00:10:41.744 "07df5d71-58a8-48d4-9c2f-55be88907283" 00:10:41.744 ], 00:10:41.744 "product_name": "Malloc disk", 00:10:41.744 "block_size": 512, 00:10:41.744 "num_blocks": 65536, 00:10:41.744 "uuid": "07df5d71-58a8-48d4-9c2f-55be88907283", 00:10:41.744 "assigned_rate_limits": { 00:10:41.744 "rw_ios_per_sec": 0, 00:10:41.744 "rw_mbytes_per_sec": 0, 00:10:41.744 "r_mbytes_per_sec": 0, 00:10:41.744 "w_mbytes_per_sec": 0 00:10:41.744 }, 00:10:41.744 "claimed": true, 00:10:41.744 "claim_type": "exclusive_write", 00:10:41.744 "zoned": false, 00:10:41.744 "supported_io_types": { 
00:10:41.744 "read": true, 00:10:41.744 "write": true, 00:10:41.744 "unmap": true, 00:10:41.744 "flush": true, 00:10:41.744 "reset": true, 00:10:41.744 "nvme_admin": false, 00:10:41.744 "nvme_io": false, 00:10:41.744 "nvme_io_md": false, 00:10:41.744 "write_zeroes": true, 00:10:41.744 "zcopy": true, 00:10:41.744 "get_zone_info": false, 00:10:41.744 "zone_management": false, 00:10:41.744 "zone_append": false, 00:10:41.744 "compare": false, 00:10:41.744 "compare_and_write": false, 00:10:41.744 "abort": true, 00:10:41.744 "seek_hole": false, 00:10:41.744 "seek_data": false, 00:10:41.744 "copy": true, 00:10:41.744 "nvme_iov_md": false 00:10:41.744 }, 00:10:41.744 "memory_domains": [ 00:10:41.744 { 00:10:41.744 "dma_device_id": "system", 00:10:41.744 "dma_device_type": 1 00:10:41.744 }, 00:10:41.744 { 00:10:41.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.744 "dma_device_type": 2 00:10:41.744 } 00:10:41.744 ], 00:10:41.744 "driver_specific": {} 00:10:41.744 } 00:10:41.744 ] 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.744 17:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.744 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.744 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.744 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.744 "name": "Existed_Raid", 00:10:41.744 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:41.744 "strip_size_kb": 64, 00:10:41.744 "state": "configuring", 00:10:41.744 "raid_level": "concat", 00:10:41.744 "superblock": true, 00:10:41.744 "num_base_bdevs": 4, 00:10:41.744 "num_base_bdevs_discovered": 3, 00:10:41.744 "num_base_bdevs_operational": 4, 00:10:41.744 "base_bdevs_list": [ 00:10:41.744 { 00:10:41.744 "name": "BaseBdev1", 00:10:41.744 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:41.744 "is_configured": true, 00:10:41.744 "data_offset": 2048, 00:10:41.744 "data_size": 63488 00:10:41.744 }, 00:10:41.744 { 00:10:41.744 "name": "BaseBdev2", 00:10:41.744 
"uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 00:10:41.744 "is_configured": true, 00:10:41.744 "data_offset": 2048, 00:10:41.744 "data_size": 63488 00:10:41.744 }, 00:10:41.744 { 00:10:41.744 "name": "BaseBdev3", 00:10:41.744 "uuid": "07df5d71-58a8-48d4-9c2f-55be88907283", 00:10:41.744 "is_configured": true, 00:10:41.744 "data_offset": 2048, 00:10:41.744 "data_size": 63488 00:10:41.744 }, 00:10:41.744 { 00:10:41.744 "name": "BaseBdev4", 00:10:41.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.744 "is_configured": false, 00:10:41.744 "data_offset": 0, 00:10:41.744 "data_size": 0 00:10:41.744 } 00:10:41.744 ] 00:10:41.744 }' 00:10:41.744 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.744 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 [2024-10-25 17:52:00.500493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.315 [2024-10-25 17:52:00.500875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.315 [2024-10-25 17:52:00.500931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.315 [2024-10-25 17:52:00.501236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:42.315 [2024-10-25 17:52:00.501429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.315 [2024-10-25 17:52:00.501477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:42.315 BaseBdev4 00:10:42.315 [2024-10-25 17:52:00.501666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.315 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 [ 00:10:42.315 { 00:10:42.315 "name": "BaseBdev4", 00:10:42.315 "aliases": [ 00:10:42.315 "34a4e809-63a9-420e-a4dc-1d66b68f66ae" 00:10:42.315 ], 00:10:42.315 "product_name": "Malloc disk", 00:10:42.315 "block_size": 512,
00:10:42.315 "num_blocks": 65536, 00:10:42.315 "uuid": "34a4e809-63a9-420e-a4dc-1d66b68f66ae", 00:10:42.315 "assigned_rate_limits": { 00:10:42.315 "rw_ios_per_sec": 0, 00:10:42.315 "rw_mbytes_per_sec": 0, 00:10:42.315 "r_mbytes_per_sec": 0, 00:10:42.315 "w_mbytes_per_sec": 0 00:10:42.315 }, 00:10:42.315 "claimed": true, 00:10:42.315 "claim_type": "exclusive_write", 00:10:42.315 "zoned": false, 00:10:42.315 "supported_io_types": { 00:10:42.315 "read": true, 00:10:42.315 "write": true, 00:10:42.315 "unmap": true, 00:10:42.315 "flush": true, 00:10:42.315 "reset": true, 00:10:42.315 "nvme_admin": false, 00:10:42.315 "nvme_io": false, 00:10:42.315 "nvme_io_md": false, 00:10:42.315 "write_zeroes": true, 00:10:42.315 "zcopy": true, 00:10:42.315 "get_zone_info": false, 00:10:42.316 "zone_management": false, 00:10:42.316 "zone_append": false, 00:10:42.316 "compare": false, 00:10:42.316 "compare_and_write": false, 00:10:42.316 "abort": true, 00:10:42.316 "seek_hole": false, 00:10:42.316 "seek_data": false, 00:10:42.316 "copy": true, 00:10:42.316 "nvme_iov_md": false 00:10:42.316 }, 00:10:42.316 "memory_domains": [ 00:10:42.316 { 00:10:42.316 "dma_device_id": "system", 00:10:42.316 "dma_device_type": 1 00:10:42.316 }, 00:10:42.316 { 00:10:42.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.316 "dma_device_type": 2 00:10:42.316 } 00:10:42.316 ], 00:10:42.316 "driver_specific": {} 00:10:42.316 } 00:10:42.316 ] 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.316 "name": "Existed_Raid", 00:10:42.316 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:42.316 "strip_size_kb": 64, 00:10:42.316 "state": "online", 00:10:42.316 "raid_level": "concat", 00:10:42.316 "superblock": true, 00:10:42.316 "num_base_bdevs": 
4, 00:10:42.316 "num_base_bdevs_discovered": 4, 00:10:42.316 "num_base_bdevs_operational": 4, 00:10:42.316 "base_bdevs_list": [ 00:10:42.316 { 00:10:42.316 "name": "BaseBdev1", 00:10:42.316 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:42.316 "is_configured": true, 00:10:42.316 "data_offset": 2048, 00:10:42.316 "data_size": 63488 00:10:42.316 }, 00:10:42.316 { 00:10:42.316 "name": "BaseBdev2", 00:10:42.316 "uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 00:10:42.316 "is_configured": true, 00:10:42.316 "data_offset": 2048, 00:10:42.316 "data_size": 63488 00:10:42.316 }, 00:10:42.316 { 00:10:42.316 "name": "BaseBdev3", 00:10:42.316 "uuid": "07df5d71-58a8-48d4-9c2f-55be88907283", 00:10:42.316 "is_configured": true, 00:10:42.316 "data_offset": 2048, 00:10:42.316 "data_size": 63488 00:10:42.316 }, 00:10:42.316 { 00:10:42.316 "name": "BaseBdev4", 00:10:42.316 "uuid": "34a4e809-63a9-420e-a4dc-1d66b68f66ae", 00:10:42.316 "is_configured": true, 00:10:42.316 "data_offset": 2048, 00:10:42.316 "data_size": 63488 00:10:42.316 } 00:10:42.316 ] 00:10:42.316 }' 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.316 17:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.574 17:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.574 
17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.574 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.574 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.574 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.574 [2024-10-25 17:52:01.008210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.833 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.833 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.833 "name": "Existed_Raid", 00:10:42.833 "aliases": [ 00:10:42.833 "4e5013a1-ac47-4334-b258-5516cd2ef9eb" 00:10:42.833 ], 00:10:42.833 "product_name": "Raid Volume", 00:10:42.833 "block_size": 512, 00:10:42.833 "num_blocks": 253952, 00:10:42.833 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:42.833 "assigned_rate_limits": { 00:10:42.833 "rw_ios_per_sec": 0, 00:10:42.833 "rw_mbytes_per_sec": 0, 00:10:42.834 "r_mbytes_per_sec": 0, 00:10:42.834 "w_mbytes_per_sec": 0 00:10:42.834 }, 00:10:42.834 "claimed": false, 00:10:42.834 "zoned": false, 00:10:42.834 "supported_io_types": { 00:10:42.834 "read": true, 00:10:42.834 "write": true, 00:10:42.834 "unmap": true, 00:10:42.834 "flush": true, 00:10:42.834 "reset": true, 00:10:42.834 "nvme_admin": false, 00:10:42.834 "nvme_io": false, 00:10:42.834 "nvme_io_md": false, 00:10:42.834 "write_zeroes": true, 00:10:42.834 "zcopy": false, 00:10:42.834 "get_zone_info": false, 00:10:42.834 "zone_management": false, 00:10:42.834 "zone_append": false, 00:10:42.834 "compare": false, 00:10:42.834 "compare_and_write": false, 00:10:42.834 "abort": false, 00:10:42.834 "seek_hole": false, 00:10:42.834 "seek_data": false, 00:10:42.834 "copy": false, 00:10:42.834 
"nvme_iov_md": false 00:10:42.834 }, 00:10:42.834 "memory_domains": [ 00:10:42.834 { 00:10:42.834 "dma_device_id": "system", 00:10:42.834 "dma_device_type": 1 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.834 "dma_device_type": 2 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "system", 00:10:42.834 "dma_device_type": 1 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.834 "dma_device_type": 2 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "system", 00:10:42.834 "dma_device_type": 1 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.834 "dma_device_type": 2 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "system", 00:10:42.834 "dma_device_type": 1 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.834 "dma_device_type": 2 00:10:42.834 } 00:10:42.834 ], 00:10:42.834 "driver_specific": { 00:10:42.834 "raid": { 00:10:42.834 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:42.834 "strip_size_kb": 64, 00:10:42.834 "state": "online", 00:10:42.834 "raid_level": "concat", 00:10:42.834 "superblock": true, 00:10:42.834 "num_base_bdevs": 4, 00:10:42.834 "num_base_bdevs_discovered": 4, 00:10:42.834 "num_base_bdevs_operational": 4, 00:10:42.834 "base_bdevs_list": [ 00:10:42.834 { 00:10:42.834 "name": "BaseBdev1", 00:10:42.834 "uuid": "657297f0-d087-4d03-8fc6-f3480fb66e3e", 00:10:42.834 "is_configured": true, 00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "name": "BaseBdev2", 00:10:42.834 "uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 00:10:42.834 "is_configured": true, 00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "name": "BaseBdev3", 00:10:42.834 "uuid": "07df5d71-58a8-48d4-9c2f-55be88907283", 00:10:42.834 "is_configured": true, 
00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "name": "BaseBdev4", 00:10:42.834 "uuid": "34a4e809-63a9-420e-a4dc-1d66b68f66ae", 00:10:42.834 "is_configured": true, 00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 } 00:10:42.834 ] 00:10:42.834 } 00:10:42.834 } 00:10:42.834 }' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:42.834 BaseBdev2 00:10:42.834 BaseBdev3 00:10:42.834 BaseBdev4' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.834 17:52:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.095 [2024-10-25 17:52:01.319345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.095 [2024-10-25 17:52:01.319422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.095 [2024-10-25 17:52:01.319492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.095 "name": "Existed_Raid", 00:10:43.095 "uuid": "4e5013a1-ac47-4334-b258-5516cd2ef9eb", 00:10:43.095 "strip_size_kb": 64, 00:10:43.095 "state": "offline", 00:10:43.095 "raid_level": "concat", 00:10:43.095 "superblock": true, 00:10:43.095 "num_base_bdevs": 4, 00:10:43.095 "num_base_bdevs_discovered": 3, 00:10:43.095 "num_base_bdevs_operational": 3, 00:10:43.095 "base_bdevs_list": [ 00:10:43.095 { 00:10:43.095 "name": null, 00:10:43.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.095 "is_configured": false, 00:10:43.095 "data_offset": 0, 00:10:43.095 "data_size": 63488 00:10:43.095 }, 00:10:43.095 { 00:10:43.095 "name": "BaseBdev2", 00:10:43.095 "uuid": "60357065-1041-46db-a28f-2c7fa1a70c25", 00:10:43.095 "is_configured": true, 00:10:43.095 "data_offset": 2048, 00:10:43.095 "data_size": 63488 00:10:43.095 }, 00:10:43.095 { 00:10:43.095 "name": "BaseBdev3", 00:10:43.095 "uuid": "07df5d71-58a8-48d4-9c2f-55be88907283", 00:10:43.095 "is_configured": true, 00:10:43.095 "data_offset": 2048, 00:10:43.095 "data_size": 63488 00:10:43.095 }, 00:10:43.095 { 00:10:43.095 "name": "BaseBdev4", 00:10:43.095 "uuid": "34a4e809-63a9-420e-a4dc-1d66b68f66ae", 00:10:43.095 "is_configured": true, 00:10:43.095 "data_offset": 2048, 00:10:43.095 "data_size": 63488 00:10:43.095 } 00:10:43.095 ] 00:10:43.095 }' 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.095 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.665 
17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.665 17:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 [2024-10-25 17:52:01.956269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.665 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 [2024-10-25 17:52:02.109497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:43.925 17:52:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.925 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 [2024-10-25 17:52:02.261474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:43.925 [2024-10-25 17:52:02.261525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.185 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 BaseBdev2 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 [ 00:10:44.186 { 00:10:44.186 "name": "BaseBdev2", 00:10:44.186 "aliases": [ 00:10:44.186 
"cb546d1c-41d0-4898-9860-c83e31a5b128" 00:10:44.186 ], 00:10:44.186 "product_name": "Malloc disk", 00:10:44.186 "block_size": 512, 00:10:44.186 "num_blocks": 65536, 00:10:44.186 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:44.186 "assigned_rate_limits": { 00:10:44.186 "rw_ios_per_sec": 0, 00:10:44.186 "rw_mbytes_per_sec": 0, 00:10:44.186 "r_mbytes_per_sec": 0, 00:10:44.186 "w_mbytes_per_sec": 0 00:10:44.186 }, 00:10:44.186 "claimed": false, 00:10:44.186 "zoned": false, 00:10:44.186 "supported_io_types": { 00:10:44.186 "read": true, 00:10:44.186 "write": true, 00:10:44.186 "unmap": true, 00:10:44.186 "flush": true, 00:10:44.186 "reset": true, 00:10:44.186 "nvme_admin": false, 00:10:44.186 "nvme_io": false, 00:10:44.186 "nvme_io_md": false, 00:10:44.186 "write_zeroes": true, 00:10:44.186 "zcopy": true, 00:10:44.186 "get_zone_info": false, 00:10:44.186 "zone_management": false, 00:10:44.186 "zone_append": false, 00:10:44.186 "compare": false, 00:10:44.186 "compare_and_write": false, 00:10:44.186 "abort": true, 00:10:44.186 "seek_hole": false, 00:10:44.186 "seek_data": false, 00:10:44.186 "copy": true, 00:10:44.186 "nvme_iov_md": false 00:10:44.186 }, 00:10:44.186 "memory_domains": [ 00:10:44.186 { 00:10:44.186 "dma_device_id": "system", 00:10:44.186 "dma_device_type": 1 00:10:44.186 }, 00:10:44.186 { 00:10:44.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.186 "dma_device_type": 2 00:10:44.186 } 00:10:44.186 ], 00:10:44.186 "driver_specific": {} 00:10:44.186 } 00:10:44.186 ] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.186 17:52:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 BaseBdev3 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 [ 00:10:44.186 { 
00:10:44.186 "name": "BaseBdev3", 00:10:44.186 "aliases": [ 00:10:44.186 "17c09c8a-e74e-499e-96a4-8eea52719568" 00:10:44.186 ], 00:10:44.186 "product_name": "Malloc disk", 00:10:44.186 "block_size": 512, 00:10:44.186 "num_blocks": 65536, 00:10:44.186 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:44.186 "assigned_rate_limits": { 00:10:44.186 "rw_ios_per_sec": 0, 00:10:44.186 "rw_mbytes_per_sec": 0, 00:10:44.186 "r_mbytes_per_sec": 0, 00:10:44.186 "w_mbytes_per_sec": 0 00:10:44.186 }, 00:10:44.186 "claimed": false, 00:10:44.186 "zoned": false, 00:10:44.186 "supported_io_types": { 00:10:44.186 "read": true, 00:10:44.186 "write": true, 00:10:44.186 "unmap": true, 00:10:44.186 "flush": true, 00:10:44.186 "reset": true, 00:10:44.186 "nvme_admin": false, 00:10:44.186 "nvme_io": false, 00:10:44.186 "nvme_io_md": false, 00:10:44.186 "write_zeroes": true, 00:10:44.186 "zcopy": true, 00:10:44.186 "get_zone_info": false, 00:10:44.186 "zone_management": false, 00:10:44.186 "zone_append": false, 00:10:44.186 "compare": false, 00:10:44.186 "compare_and_write": false, 00:10:44.186 "abort": true, 00:10:44.186 "seek_hole": false, 00:10:44.186 "seek_data": false, 00:10:44.186 "copy": true, 00:10:44.186 "nvme_iov_md": false 00:10:44.186 }, 00:10:44.186 "memory_domains": [ 00:10:44.186 { 00:10:44.186 "dma_device_id": "system", 00:10:44.186 "dma_device_type": 1 00:10:44.186 }, 00:10:44.186 { 00:10:44.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.186 "dma_device_type": 2 00:10:44.186 } 00:10:44.186 ], 00:10:44.186 "driver_specific": {} 00:10:44.186 } 00:10:44.186 ] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.186 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.186 BaseBdev4 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:44.446 [ 00:10:44.446 { 00:10:44.446 "name": "BaseBdev4", 00:10:44.446 "aliases": [ 00:10:44.446 "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451" 00:10:44.446 ], 00:10:44.446 "product_name": "Malloc disk", 00:10:44.446 "block_size": 512, 00:10:44.446 "num_blocks": 65536, 00:10:44.446 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:44.446 "assigned_rate_limits": { 00:10:44.446 "rw_ios_per_sec": 0, 00:10:44.446 "rw_mbytes_per_sec": 0, 00:10:44.446 "r_mbytes_per_sec": 0, 00:10:44.446 "w_mbytes_per_sec": 0 00:10:44.446 }, 00:10:44.446 "claimed": false, 00:10:44.446 "zoned": false, 00:10:44.446 "supported_io_types": { 00:10:44.446 "read": true, 00:10:44.446 "write": true, 00:10:44.446 "unmap": true, 00:10:44.446 "flush": true, 00:10:44.446 "reset": true, 00:10:44.446 "nvme_admin": false, 00:10:44.446 "nvme_io": false, 00:10:44.446 "nvme_io_md": false, 00:10:44.446 "write_zeroes": true, 00:10:44.446 "zcopy": true, 00:10:44.446 "get_zone_info": false, 00:10:44.446 "zone_management": false, 00:10:44.446 "zone_append": false, 00:10:44.446 "compare": false, 00:10:44.446 "compare_and_write": false, 00:10:44.446 "abort": true, 00:10:44.446 "seek_hole": false, 00:10:44.446 "seek_data": false, 00:10:44.446 "copy": true, 00:10:44.446 "nvme_iov_md": false 00:10:44.446 }, 00:10:44.446 "memory_domains": [ 00:10:44.446 { 00:10:44.446 "dma_device_id": "system", 00:10:44.446 "dma_device_type": 1 00:10:44.446 }, 00:10:44.446 { 00:10:44.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.446 "dma_device_type": 2 00:10:44.446 } 00:10:44.446 ], 00:10:44.446 "driver_specific": {} 00:10:44.446 } 00:10:44.446 ] 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.446 17:52:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.446 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.447 [2024-10-25 17:52:02.667313] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.447 [2024-10-25 17:52:02.667376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.447 [2024-10-25 17:52:02.667401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.447 [2024-10-25 17:52:02.669559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.447 [2024-10-25 17:52:02.669633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.447 "name": "Existed_Raid", 00:10:44.447 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:44.447 "strip_size_kb": 64, 00:10:44.447 "state": "configuring", 00:10:44.447 "raid_level": "concat", 00:10:44.447 "superblock": true, 00:10:44.447 "num_base_bdevs": 4, 00:10:44.447 "num_base_bdevs_discovered": 3, 00:10:44.447 "num_base_bdevs_operational": 4, 00:10:44.447 "base_bdevs_list": [ 00:10:44.447 { 00:10:44.447 "name": "BaseBdev1", 00:10:44.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.447 "is_configured": false, 00:10:44.447 "data_offset": 0, 00:10:44.447 "data_size": 0 00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": "BaseBdev2", 00:10:44.447 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:44.447 "is_configured": true, 00:10:44.447 "data_offset": 2048, 00:10:44.447 "data_size": 63488 
00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": "BaseBdev3", 00:10:44.447 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:44.447 "is_configured": true, 00:10:44.447 "data_offset": 2048, 00:10:44.447 "data_size": 63488 00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": "BaseBdev4", 00:10:44.447 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:44.447 "is_configured": true, 00:10:44.447 "data_offset": 2048, 00:10:44.447 "data_size": 63488 00:10:44.447 } 00:10:44.447 ] 00:10:44.447 }' 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.447 17:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.706 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.707 [2024-10-25 17:52:03.126560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.707 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.967 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.967 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.967 "name": "Existed_Raid", 00:10:44.967 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:44.967 "strip_size_kb": 64, 00:10:44.967 "state": "configuring", 00:10:44.967 "raid_level": "concat", 00:10:44.967 "superblock": true, 00:10:44.967 "num_base_bdevs": 4, 00:10:44.967 "num_base_bdevs_discovered": 2, 00:10:44.967 "num_base_bdevs_operational": 4, 00:10:44.967 "base_bdevs_list": [ 00:10:44.967 { 00:10:44.967 "name": "BaseBdev1", 00:10:44.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.967 "is_configured": false, 00:10:44.967 "data_offset": 0, 00:10:44.967 "data_size": 0 00:10:44.967 }, 00:10:44.967 { 00:10:44.967 "name": null, 00:10:44.967 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:44.967 "is_configured": false, 00:10:44.967 "data_offset": 0, 00:10:44.967 "data_size": 63488 
00:10:44.967 }, 00:10:44.967 { 00:10:44.967 "name": "BaseBdev3", 00:10:44.967 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:44.967 "is_configured": true, 00:10:44.967 "data_offset": 2048, 00:10:44.967 "data_size": 63488 00:10:44.967 }, 00:10:44.967 { 00:10:44.967 "name": "BaseBdev4", 00:10:44.967 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:44.967 "is_configured": true, 00:10:44.967 "data_offset": 2048, 00:10:44.967 "data_size": 63488 00:10:44.967 } 00:10:44.967 ] 00:10:44.967 }' 00:10:44.967 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.967 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.227 [2024-10-25 17:52:03.659225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.227 BaseBdev1 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.227 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.486 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.486 [ 00:10:45.486 { 00:10:45.486 "name": "BaseBdev1", 00:10:45.486 "aliases": [ 00:10:45.487 "51216d75-0e78-4d24-8953-5fb76e387d0a" 00:10:45.487 ], 00:10:45.487 "product_name": "Malloc disk", 00:10:45.487 "block_size": 512, 00:10:45.487 "num_blocks": 65536, 00:10:45.487 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:45.487 "assigned_rate_limits": { 00:10:45.487 "rw_ios_per_sec": 0, 00:10:45.487 "rw_mbytes_per_sec": 0, 
00:10:45.487 "r_mbytes_per_sec": 0, 00:10:45.487 "w_mbytes_per_sec": 0 00:10:45.487 }, 00:10:45.487 "claimed": true, 00:10:45.487 "claim_type": "exclusive_write", 00:10:45.487 "zoned": false, 00:10:45.487 "supported_io_types": { 00:10:45.487 "read": true, 00:10:45.487 "write": true, 00:10:45.487 "unmap": true, 00:10:45.487 "flush": true, 00:10:45.487 "reset": true, 00:10:45.487 "nvme_admin": false, 00:10:45.487 "nvme_io": false, 00:10:45.487 "nvme_io_md": false, 00:10:45.487 "write_zeroes": true, 00:10:45.487 "zcopy": true, 00:10:45.487 "get_zone_info": false, 00:10:45.487 "zone_management": false, 00:10:45.487 "zone_append": false, 00:10:45.487 "compare": false, 00:10:45.487 "compare_and_write": false, 00:10:45.487 "abort": true, 00:10:45.487 "seek_hole": false, 00:10:45.487 "seek_data": false, 00:10:45.487 "copy": true, 00:10:45.487 "nvme_iov_md": false 00:10:45.487 }, 00:10:45.487 "memory_domains": [ 00:10:45.487 { 00:10:45.487 "dma_device_id": "system", 00:10:45.487 "dma_device_type": 1 00:10:45.487 }, 00:10:45.487 { 00:10:45.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.487 "dma_device_type": 2 00:10:45.487 } 00:10:45.487 ], 00:10:45.487 "driver_specific": {} 00:10:45.487 } 00:10:45.487 ] 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.487 17:52:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.487 "name": "Existed_Raid", 00:10:45.487 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:45.487 "strip_size_kb": 64, 00:10:45.487 "state": "configuring", 00:10:45.487 "raid_level": "concat", 00:10:45.487 "superblock": true, 00:10:45.487 "num_base_bdevs": 4, 00:10:45.487 "num_base_bdevs_discovered": 3, 00:10:45.487 "num_base_bdevs_operational": 4, 00:10:45.487 "base_bdevs_list": [ 00:10:45.487 { 00:10:45.487 "name": "BaseBdev1", 00:10:45.487 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:45.487 "is_configured": true, 00:10:45.487 "data_offset": 2048, 00:10:45.487 "data_size": 63488 00:10:45.487 }, 00:10:45.487 { 
00:10:45.487 "name": null, 00:10:45.487 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:45.487 "is_configured": false, 00:10:45.487 "data_offset": 0, 00:10:45.487 "data_size": 63488 00:10:45.487 }, 00:10:45.487 { 00:10:45.487 "name": "BaseBdev3", 00:10:45.487 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:45.487 "is_configured": true, 00:10:45.487 "data_offset": 2048, 00:10:45.487 "data_size": 63488 00:10:45.487 }, 00:10:45.487 { 00:10:45.487 "name": "BaseBdev4", 00:10:45.487 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:45.487 "is_configured": true, 00:10:45.487 "data_offset": 2048, 00:10:45.487 "data_size": 63488 00:10:45.487 } 00:10:45.487 ] 00:10:45.487 }' 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.487 17:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.746 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.746 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.746 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.746 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.006 [2024-10-25 17:52:04.210423] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.006 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.007 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.007 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.007 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.007 17:52:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.007 "name": "Existed_Raid", 00:10:46.007 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:46.007 "strip_size_kb": 64, 00:10:46.007 "state": "configuring", 00:10:46.007 "raid_level": "concat", 00:10:46.007 "superblock": true, 00:10:46.007 "num_base_bdevs": 4, 00:10:46.007 "num_base_bdevs_discovered": 2, 00:10:46.007 "num_base_bdevs_operational": 4, 00:10:46.007 "base_bdevs_list": [ 00:10:46.007 { 00:10:46.007 "name": "BaseBdev1", 00:10:46.007 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:46.007 "is_configured": true, 00:10:46.007 "data_offset": 2048, 00:10:46.007 "data_size": 63488 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "name": null, 00:10:46.007 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:46.007 "is_configured": false, 00:10:46.007 "data_offset": 0, 00:10:46.007 "data_size": 63488 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "name": null, 00:10:46.007 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:46.007 "is_configured": false, 00:10:46.007 "data_offset": 0, 00:10:46.007 "data_size": 63488 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "name": "BaseBdev4", 00:10:46.007 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:46.007 "is_configured": true, 00:10:46.007 "data_offset": 2048, 00:10:46.007 "data_size": 63488 00:10:46.007 } 00:10:46.007 ] 00:10:46.007 }' 00:10:46.007 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.007 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.267 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.267 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 17:52:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.267 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 [2024-10-25 17:52:04.709530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.528 "name": "Existed_Raid", 00:10:46.528 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:46.528 "strip_size_kb": 64, 00:10:46.528 "state": "configuring", 00:10:46.528 "raid_level": "concat", 00:10:46.528 "superblock": true, 00:10:46.528 "num_base_bdevs": 4, 00:10:46.528 "num_base_bdevs_discovered": 3, 00:10:46.528 "num_base_bdevs_operational": 4, 00:10:46.528 "base_bdevs_list": [ 00:10:46.528 { 00:10:46.528 "name": "BaseBdev1", 00:10:46.528 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:46.528 "is_configured": true, 00:10:46.528 "data_offset": 2048, 00:10:46.528 "data_size": 63488 00:10:46.528 }, 00:10:46.528 { 00:10:46.528 "name": null, 00:10:46.528 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:46.528 "is_configured": false, 00:10:46.528 "data_offset": 0, 00:10:46.528 "data_size": 63488 00:10:46.528 }, 00:10:46.528 { 00:10:46.528 "name": "BaseBdev3", 00:10:46.528 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:46.528 "is_configured": true, 00:10:46.528 "data_offset": 2048, 00:10:46.528 "data_size": 63488 00:10:46.528 }, 00:10:46.528 { 00:10:46.528 "name": "BaseBdev4", 00:10:46.528 "uuid": 
"3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:46.528 "is_configured": true, 00:10:46.528 "data_offset": 2048, 00:10:46.528 "data_size": 63488 00:10:46.528 } 00:10:46.528 ] 00:10:46.528 }' 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.528 17:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.789 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.789 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.789 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.789 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.789 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.050 [2024-10-25 17:52:05.240717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.050 "name": "Existed_Raid", 00:10:47.050 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:47.050 "strip_size_kb": 64, 00:10:47.050 "state": "configuring", 00:10:47.050 "raid_level": "concat", 00:10:47.050 "superblock": true, 00:10:47.050 "num_base_bdevs": 4, 00:10:47.050 "num_base_bdevs_discovered": 2, 00:10:47.050 "num_base_bdevs_operational": 4, 00:10:47.050 "base_bdevs_list": [ 00:10:47.050 { 00:10:47.050 "name": null, 00:10:47.050 
"uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:47.050 "is_configured": false, 00:10:47.050 "data_offset": 0, 00:10:47.050 "data_size": 63488 00:10:47.050 }, 00:10:47.050 { 00:10:47.050 "name": null, 00:10:47.050 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:47.050 "is_configured": false, 00:10:47.050 "data_offset": 0, 00:10:47.050 "data_size": 63488 00:10:47.050 }, 00:10:47.050 { 00:10:47.050 "name": "BaseBdev3", 00:10:47.050 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:47.050 "is_configured": true, 00:10:47.050 "data_offset": 2048, 00:10:47.050 "data_size": 63488 00:10:47.050 }, 00:10:47.050 { 00:10:47.050 "name": "BaseBdev4", 00:10:47.050 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:47.050 "is_configured": true, 00:10:47.050 "data_offset": 2048, 00:10:47.050 "data_size": 63488 00:10:47.050 } 00:10:47.050 ] 00:10:47.050 }' 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.050 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.648 [2024-10-25 17:52:05.824980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.648 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.649 17:52:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.649 "name": "Existed_Raid", 00:10:47.649 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:47.649 "strip_size_kb": 64, 00:10:47.649 "state": "configuring", 00:10:47.649 "raid_level": "concat", 00:10:47.649 "superblock": true, 00:10:47.649 "num_base_bdevs": 4, 00:10:47.649 "num_base_bdevs_discovered": 3, 00:10:47.649 "num_base_bdevs_operational": 4, 00:10:47.649 "base_bdevs_list": [ 00:10:47.649 { 00:10:47.649 "name": null, 00:10:47.649 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:47.649 "is_configured": false, 00:10:47.649 "data_offset": 0, 00:10:47.649 "data_size": 63488 00:10:47.649 }, 00:10:47.649 { 00:10:47.649 "name": "BaseBdev2", 00:10:47.649 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:47.649 "is_configured": true, 00:10:47.649 "data_offset": 2048, 00:10:47.649 "data_size": 63488 00:10:47.649 }, 00:10:47.649 { 00:10:47.649 "name": "BaseBdev3", 00:10:47.649 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:47.649 "is_configured": true, 00:10:47.649 "data_offset": 2048, 00:10:47.649 "data_size": 63488 00:10:47.649 }, 00:10:47.649 { 00:10:47.649 "name": "BaseBdev4", 00:10:47.649 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:47.649 "is_configured": true, 00:10:47.649 "data_offset": 2048, 00:10:47.649 "data_size": 63488 00:10:47.649 } 00:10:47.649 ] 00:10:47.649 }' 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.649 17:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.908 17:52:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.908 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 51216d75-0e78-4d24-8953-5fb76e387d0a 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.184 [2024-10-25 17:52:06.402118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.184 [2024-10-25 17:52:06.402501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.184 [2024-10-25 17:52:06.402561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.184 [2024-10-25 17:52:06.402893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:48.184 NewBaseBdev 00:10:48.184 [2024-10-25 17:52:06.403115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.184 [2024-10-25 17:52:06.403133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:48.184 [2024-10-25 17:52:06.403275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.184 17:52:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.184 [ 00:10:48.184 { 00:10:48.184 "name": "NewBaseBdev", 00:10:48.184 "aliases": [ 00:10:48.184 "51216d75-0e78-4d24-8953-5fb76e387d0a" 00:10:48.184 ], 00:10:48.184 "product_name": "Malloc disk", 00:10:48.184 "block_size": 512, 00:10:48.184 "num_blocks": 65536, 00:10:48.184 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:48.184 "assigned_rate_limits": { 00:10:48.184 "rw_ios_per_sec": 0, 00:10:48.184 "rw_mbytes_per_sec": 0, 00:10:48.184 "r_mbytes_per_sec": 0, 00:10:48.184 "w_mbytes_per_sec": 0 00:10:48.184 }, 00:10:48.184 "claimed": true, 00:10:48.184 "claim_type": "exclusive_write", 00:10:48.184 "zoned": false, 00:10:48.184 "supported_io_types": { 00:10:48.184 "read": true, 00:10:48.184 "write": true, 00:10:48.184 "unmap": true, 00:10:48.184 "flush": true, 00:10:48.184 "reset": true, 00:10:48.184 "nvme_admin": false, 00:10:48.184 "nvme_io": false, 00:10:48.184 "nvme_io_md": false, 00:10:48.184 "write_zeroes": true, 00:10:48.184 "zcopy": true, 00:10:48.184 "get_zone_info": false, 00:10:48.184 "zone_management": false, 00:10:48.184 "zone_append": false, 00:10:48.184 "compare": false, 00:10:48.184 "compare_and_write": false, 00:10:48.184 "abort": true, 00:10:48.184 "seek_hole": false, 00:10:48.184 "seek_data": false, 00:10:48.184 "copy": true, 00:10:48.184 "nvme_iov_md": false 00:10:48.184 }, 00:10:48.184 "memory_domains": [ 00:10:48.184 { 00:10:48.184 "dma_device_id": "system", 00:10:48.184 "dma_device_type": 1 00:10:48.184 }, 00:10:48.184 { 00:10:48.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.184 "dma_device_type": 2 00:10:48.184 } 00:10:48.184 ], 00:10:48.184 "driver_specific": {} 00:10:48.184 } 00:10:48.184 ] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:48.184 17:52:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.184 "name": "Existed_Raid", 00:10:48.184 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:48.184 "strip_size_kb": 64, 00:10:48.184 
"state": "online", 00:10:48.184 "raid_level": "concat", 00:10:48.184 "superblock": true, 00:10:48.184 "num_base_bdevs": 4, 00:10:48.184 "num_base_bdevs_discovered": 4, 00:10:48.184 "num_base_bdevs_operational": 4, 00:10:48.184 "base_bdevs_list": [ 00:10:48.184 { 00:10:48.184 "name": "NewBaseBdev", 00:10:48.184 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:48.184 "is_configured": true, 00:10:48.184 "data_offset": 2048, 00:10:48.184 "data_size": 63488 00:10:48.184 }, 00:10:48.184 { 00:10:48.184 "name": "BaseBdev2", 00:10:48.184 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:48.184 "is_configured": true, 00:10:48.184 "data_offset": 2048, 00:10:48.184 "data_size": 63488 00:10:48.184 }, 00:10:48.184 { 00:10:48.184 "name": "BaseBdev3", 00:10:48.184 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:48.184 "is_configured": true, 00:10:48.184 "data_offset": 2048, 00:10:48.184 "data_size": 63488 00:10:48.184 }, 00:10:48.184 { 00:10:48.184 "name": "BaseBdev4", 00:10:48.184 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:48.184 "is_configured": true, 00:10:48.184 "data_offset": 2048, 00:10:48.184 "data_size": 63488 00:10:48.184 } 00:10:48.184 ] 00:10:48.184 }' 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.184 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.760 
17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 [2024-10-25 17:52:06.905788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.760 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.760 "name": "Existed_Raid", 00:10:48.760 "aliases": [ 00:10:48.760 "e4ab04f1-56e8-4524-8076-8cbed8fc46eb" 00:10:48.760 ], 00:10:48.760 "product_name": "Raid Volume", 00:10:48.760 "block_size": 512, 00:10:48.760 "num_blocks": 253952, 00:10:48.760 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:48.760 "assigned_rate_limits": { 00:10:48.760 "rw_ios_per_sec": 0, 00:10:48.760 "rw_mbytes_per_sec": 0, 00:10:48.760 "r_mbytes_per_sec": 0, 00:10:48.760 "w_mbytes_per_sec": 0 00:10:48.760 }, 00:10:48.760 "claimed": false, 00:10:48.760 "zoned": false, 00:10:48.760 "supported_io_types": { 00:10:48.760 "read": true, 00:10:48.760 "write": true, 00:10:48.760 "unmap": true, 00:10:48.760 "flush": true, 00:10:48.760 "reset": true, 00:10:48.760 "nvme_admin": false, 00:10:48.760 "nvme_io": false, 00:10:48.760 "nvme_io_md": false, 00:10:48.760 "write_zeroes": true, 00:10:48.760 "zcopy": false, 00:10:48.760 "get_zone_info": false, 00:10:48.760 "zone_management": false, 00:10:48.760 "zone_append": false, 00:10:48.760 "compare": false, 00:10:48.760 "compare_and_write": false, 00:10:48.760 "abort": 
false, 00:10:48.760 "seek_hole": false, 00:10:48.760 "seek_data": false, 00:10:48.760 "copy": false, 00:10:48.760 "nvme_iov_md": false 00:10:48.760 }, 00:10:48.760 "memory_domains": [ 00:10:48.760 { 00:10:48.760 "dma_device_id": "system", 00:10:48.760 "dma_device_type": 1 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.760 "dma_device_type": 2 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "system", 00:10:48.760 "dma_device_type": 1 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.760 "dma_device_type": 2 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "system", 00:10:48.760 "dma_device_type": 1 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.760 "dma_device_type": 2 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "system", 00:10:48.760 "dma_device_type": 1 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.760 "dma_device_type": 2 00:10:48.760 } 00:10:48.760 ], 00:10:48.760 "driver_specific": { 00:10:48.760 "raid": { 00:10:48.760 "uuid": "e4ab04f1-56e8-4524-8076-8cbed8fc46eb", 00:10:48.760 "strip_size_kb": 64, 00:10:48.760 "state": "online", 00:10:48.760 "raid_level": "concat", 00:10:48.760 "superblock": true, 00:10:48.760 "num_base_bdevs": 4, 00:10:48.760 "num_base_bdevs_discovered": 4, 00:10:48.760 "num_base_bdevs_operational": 4, 00:10:48.760 "base_bdevs_list": [ 00:10:48.760 { 00:10:48.760 "name": "NewBaseBdev", 00:10:48.760 "uuid": "51216d75-0e78-4d24-8953-5fb76e387d0a", 00:10:48.760 "is_configured": true, 00:10:48.760 "data_offset": 2048, 00:10:48.760 "data_size": 63488 00:10:48.760 }, 00:10:48.760 { 00:10:48.760 "name": "BaseBdev2", 00:10:48.761 "uuid": "cb546d1c-41d0-4898-9860-c83e31a5b128", 00:10:48.761 "is_configured": true, 00:10:48.761 "data_offset": 2048, 00:10:48.761 "data_size": 63488 00:10:48.761 }, 00:10:48.761 { 00:10:48.761 
"name": "BaseBdev3", 00:10:48.761 "uuid": "17c09c8a-e74e-499e-96a4-8eea52719568", 00:10:48.761 "is_configured": true, 00:10:48.761 "data_offset": 2048, 00:10:48.761 "data_size": 63488 00:10:48.761 }, 00:10:48.761 { 00:10:48.761 "name": "BaseBdev4", 00:10:48.761 "uuid": "3ddaf93e-e312-4201-a2dc-e7e2fa3a4451", 00:10:48.761 "is_configured": true, 00:10:48.761 "data_offset": 2048, 00:10:48.761 "data_size": 63488 00:10:48.761 } 00:10:48.761 ] 00:10:48.761 } 00:10:48.761 } 00:10:48.761 }' 00:10:48.761 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.761 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.761 BaseBdev2 00:10:48.761 BaseBdev3 00:10:48.761 BaseBdev4' 00:10:48.761 17:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.761 17:52:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.761 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.020 [2024-10-25 17:52:07.244844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.020 [2024-10-25 17:52:07.244924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.020 [2024-10-25 17:52:07.245047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.020 [2024-10-25 17:52:07.245161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.020 [2024-10-25 17:52:07.245215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71672 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71672 ']' 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71672 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71672 00:10:49.020 killing process with pid 71672 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71672' 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71672 00:10:49.020 [2024-10-25 17:52:07.287637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.020 17:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71672 00:10:49.588 [2024-10-25 17:52:07.726144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.527 17:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.527 00:10:50.527 real 0m11.958s 00:10:50.527 user 0m18.940s 00:10:50.527 sys 0m2.170s 00:10:50.527 ************************************ 00:10:50.527 END TEST raid_state_function_test_sb 00:10:50.527 
************************************ 00:10:50.527 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.527 17:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.786 17:52:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:50.786 17:52:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:50.786 17:52:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.786 17:52:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.786 ************************************ 00:10:50.786 START TEST raid_superblock_test 00:10:50.786 ************************************ 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:50.786 17:52:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72344 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72344 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72344 ']' 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.786 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.787 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.787 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.787 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.787 [2024-10-25 17:52:09.114672] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:50.787 [2024-10-25 17:52:09.114811] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72344 ] 00:10:51.047 [2024-10-25 17:52:09.272321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.047 [2024-10-25 17:52:09.400947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.307 [2024-10-25 17:52:09.609895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.307 [2024-10-25 17:52:09.610061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:51.567 
17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.567 17:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.827 malloc1 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [2024-10-25 17:52:10.034343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:51.828 [2024-10-25 17:52:10.034475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.828 [2024-10-25 17:52:10.034522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:51.828 [2024-10-25 17:52:10.034559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.828 [2024-10-25 17:52:10.037047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.828 [2024-10-25 17:52:10.037129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.828 pt1 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 malloc2 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [2024-10-25 17:52:10.093951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.828 [2024-10-25 17:52:10.094009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.828 [2024-10-25 17:52:10.094032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:51.828 [2024-10-25 17:52:10.094042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.828 [2024-10-25 17:52:10.096317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.828 [2024-10-25 17:52:10.096357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.828 
pt2 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 malloc3 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [2024-10-25 17:52:10.174399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.828 [2024-10-25 17:52:10.174500] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.828 [2024-10-25 17:52:10.174540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.828 [2024-10-25 17:52:10.174593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.828 [2024-10-25 17:52:10.176797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.828 [2024-10-25 17:52:10.176906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.828 pt3 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 malloc4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [2024-10-25 17:52:10.235712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.828 [2024-10-25 17:52:10.235821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.828 [2024-10-25 17:52:10.235876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:51.828 [2024-10-25 17:52:10.235911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.828 [2024-10-25 17:52:10.238373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.828 [2024-10-25 17:52:10.238456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.828 pt4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [2024-10-25 17:52:10.247742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.828 [2024-10-25 
17:52:10.249930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.828 [2024-10-25 17:52:10.250003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.828 [2024-10-25 17:52:10.250076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.828 [2024-10-25 17:52:10.250308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:51.828 [2024-10-25 17:52:10.250328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.828 [2024-10-25 17:52:10.250640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:51.828 [2024-10-25 17:52:10.250877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:51.828 [2024-10-25 17:52:10.250895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:51.828 [2024-10-25 17:52:10.251077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.828 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.089 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.089 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.089 "name": "raid_bdev1", 00:10:52.089 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:52.089 "strip_size_kb": 64, 00:10:52.089 "state": "online", 00:10:52.089 "raid_level": "concat", 00:10:52.089 "superblock": true, 00:10:52.089 "num_base_bdevs": 4, 00:10:52.089 "num_base_bdevs_discovered": 4, 00:10:52.089 "num_base_bdevs_operational": 4, 00:10:52.089 "base_bdevs_list": [ 00:10:52.089 { 00:10:52.089 "name": "pt1", 00:10:52.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.089 "is_configured": true, 00:10:52.089 "data_offset": 2048, 00:10:52.089 "data_size": 63488 00:10:52.089 }, 00:10:52.089 { 00:10:52.089 "name": "pt2", 00:10:52.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.089 "is_configured": true, 00:10:52.089 "data_offset": 2048, 00:10:52.089 "data_size": 63488 00:10:52.089 }, 00:10:52.089 { 00:10:52.089 "name": "pt3", 00:10:52.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.089 "is_configured": true, 00:10:52.089 "data_offset": 2048, 00:10:52.089 
"data_size": 63488 00:10:52.089 }, 00:10:52.089 { 00:10:52.089 "name": "pt4", 00:10:52.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.089 "is_configured": true, 00:10:52.089 "data_offset": 2048, 00:10:52.089 "data_size": 63488 00:10:52.089 } 00:10:52.089 ] 00:10:52.089 }' 00:10:52.089 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.089 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.349 [2024-10-25 17:52:10.747293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.349 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.349 "name": "raid_bdev1", 00:10:52.349 "aliases": [ 00:10:52.349 "81150a36-3fb4-46dd-9027-7b74d7703c46" 
00:10:52.349 ], 00:10:52.349 "product_name": "Raid Volume", 00:10:52.349 "block_size": 512, 00:10:52.349 "num_blocks": 253952, 00:10:52.349 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:52.349 "assigned_rate_limits": { 00:10:52.349 "rw_ios_per_sec": 0, 00:10:52.349 "rw_mbytes_per_sec": 0, 00:10:52.349 "r_mbytes_per_sec": 0, 00:10:52.349 "w_mbytes_per_sec": 0 00:10:52.349 }, 00:10:52.349 "claimed": false, 00:10:52.349 "zoned": false, 00:10:52.349 "supported_io_types": { 00:10:52.349 "read": true, 00:10:52.349 "write": true, 00:10:52.349 "unmap": true, 00:10:52.349 "flush": true, 00:10:52.349 "reset": true, 00:10:52.349 "nvme_admin": false, 00:10:52.349 "nvme_io": false, 00:10:52.349 "nvme_io_md": false, 00:10:52.349 "write_zeroes": true, 00:10:52.349 "zcopy": false, 00:10:52.349 "get_zone_info": false, 00:10:52.349 "zone_management": false, 00:10:52.349 "zone_append": false, 00:10:52.349 "compare": false, 00:10:52.349 "compare_and_write": false, 00:10:52.349 "abort": false, 00:10:52.349 "seek_hole": false, 00:10:52.349 "seek_data": false, 00:10:52.349 "copy": false, 00:10:52.349 "nvme_iov_md": false 00:10:52.349 }, 00:10:52.349 "memory_domains": [ 00:10:52.349 { 00:10:52.350 "dma_device_id": "system", 00:10:52.350 "dma_device_type": 1 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.350 "dma_device_type": 2 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "system", 00:10:52.350 "dma_device_type": 1 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.350 "dma_device_type": 2 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "system", 00:10:52.350 "dma_device_type": 1 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.350 "dma_device_type": 2 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": "system", 00:10:52.350 "dma_device_type": 1 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:52.350 "dma_device_type": 2 00:10:52.350 } 00:10:52.350 ], 00:10:52.350 "driver_specific": { 00:10:52.350 "raid": { 00:10:52.350 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:52.350 "strip_size_kb": 64, 00:10:52.350 "state": "online", 00:10:52.350 "raid_level": "concat", 00:10:52.350 "superblock": true, 00:10:52.350 "num_base_bdevs": 4, 00:10:52.350 "num_base_bdevs_discovered": 4, 00:10:52.350 "num_base_bdevs_operational": 4, 00:10:52.350 "base_bdevs_list": [ 00:10:52.350 { 00:10:52.350 "name": "pt1", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.350 "is_configured": true, 00:10:52.350 "data_offset": 2048, 00:10:52.350 "data_size": 63488 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "pt2", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.350 "is_configured": true, 00:10:52.350 "data_offset": 2048, 00:10:52.350 "data_size": 63488 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "pt3", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.350 "is_configured": true, 00:10:52.350 "data_offset": 2048, 00:10:52.350 "data_size": 63488 00:10:52.350 }, 00:10:52.350 { 00:10:52.350 "name": "pt4", 00:10:52.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.350 "is_configured": true, 00:10:52.350 "data_offset": 2048, 00:10:52.350 "data_size": 63488 00:10:52.350 } 00:10:52.350 ] 00:10:52.350 } 00:10:52.350 } 00:10:52.350 }' 00:10:52.350 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.610 pt2 00:10:52.610 pt3 00:10:52.610 pt4' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.610 17:52:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.610 17:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.610 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.870 [2024-10-25 17:52:11.066729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.870 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81150a36-3fb4-46dd-9027-7b74d7703c46 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 81150a36-3fb4-46dd-9027-7b74d7703c46 ']' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 [2024-10-25 17:52:11.098299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.871 [2024-10-25 17:52:11.098385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.871 [2024-10-25 17:52:11.098499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.871 [2024-10-25 17:52:11.098581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.871 [2024-10-25 17:52:11.098599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.871 17:52:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 [2024-10-25 17:52:11.274033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.871 [2024-10-25 17:52:11.276202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.871 [2024-10-25 17:52:11.276312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:52.871 [2024-10-25 17:52:11.276372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:52.871 [2024-10-25 17:52:11.276462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.871 [2024-10-25 17:52:11.276527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.871 [2024-10-25 17:52:11.276549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:52.871 [2024-10-25 17:52:11.276570] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:52.871 [2024-10-25 17:52:11.276585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.871 [2024-10-25 17:52:11.276597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:52.871 request: 00:10:52.871 { 00:10:52.871 "name": "raid_bdev1", 00:10:52.871 "raid_level": "concat", 00:10:52.871 "base_bdevs": [ 00:10:52.871 "malloc1", 00:10:52.871 "malloc2", 00:10:52.871 "malloc3", 00:10:52.871 "malloc4" 00:10:52.871 ], 00:10:52.871 "strip_size_kb": 64, 00:10:52.871 "superblock": false, 00:10:52.871 "method": "bdev_raid_create", 00:10:52.871 "req_id": 1 00:10:52.871 } 00:10:52.871 Got JSON-RPC error response 00:10:52.871 response: 00:10:52.871 { 00:10:52.871 "code": -17, 00:10:52.871 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.871 } 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.132 [2024-10-25 17:52:11.333939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.132 [2024-10-25 17:52:11.334066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.132 [2024-10-25 17:52:11.334111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.132 [2024-10-25 17:52:11.334146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.132 [2024-10-25 17:52:11.336565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.132 [2024-10-25 17:52:11.336661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.132 [2024-10-25 17:52:11.336793] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:53.132 [2024-10-25 17:52:11.336918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:53.132 pt1 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.132 "name": "raid_bdev1", 00:10:53.132 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:53.132 "strip_size_kb": 64, 00:10:53.132 "state": "configuring", 00:10:53.132 "raid_level": "concat", 00:10:53.132 "superblock": true, 00:10:53.132 "num_base_bdevs": 4, 00:10:53.132 "num_base_bdevs_discovered": 1, 00:10:53.132 "num_base_bdevs_operational": 4, 00:10:53.132 "base_bdevs_list": [ 00:10:53.132 { 00:10:53.132 "name": "pt1", 00:10:53.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.132 "is_configured": true, 00:10:53.132 "data_offset": 2048, 00:10:53.132 "data_size": 63488 00:10:53.132 }, 00:10:53.132 { 00:10:53.132 "name": null, 00:10:53.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.132 "is_configured": false, 00:10:53.132 "data_offset": 2048, 00:10:53.132 "data_size": 63488 00:10:53.132 }, 00:10:53.132 { 00:10:53.132 "name": null, 00:10:53.132 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.132 "is_configured": false, 00:10:53.132 "data_offset": 2048, 00:10:53.132 "data_size": 63488 00:10:53.132 }, 00:10:53.132 { 00:10:53.132 "name": null, 00:10:53.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.132 "is_configured": false, 00:10:53.132 "data_offset": 2048, 00:10:53.132 "data_size": 63488 00:10:53.132 } 00:10:53.132 ] 00:10:53.132 }' 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.132 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.423 [2024-10-25 17:52:11.817119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.423 [2024-10-25 17:52:11.817210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.423 [2024-10-25 17:52:11.817233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:53.423 [2024-10-25 17:52:11.817246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.423 [2024-10-25 17:52:11.817752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.423 [2024-10-25 17:52:11.817774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.423 [2024-10-25 17:52:11.817875] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.423 [2024-10-25 17:52:11.817904] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.423 pt2 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.423 [2024-10-25 17:52:11.829078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.423 17:52:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.423 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.683 17:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.683 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.683 "name": "raid_bdev1", 00:10:53.683 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:53.683 "strip_size_kb": 64, 00:10:53.683 "state": "configuring", 00:10:53.683 "raid_level": "concat", 00:10:53.683 "superblock": true, 00:10:53.683 "num_base_bdevs": 4, 00:10:53.683 "num_base_bdevs_discovered": 1, 00:10:53.683 "num_base_bdevs_operational": 4, 00:10:53.683 "base_bdevs_list": [ 00:10:53.683 { 00:10:53.683 "name": "pt1", 00:10:53.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.683 "is_configured": true, 00:10:53.683 "data_offset": 2048, 00:10:53.683 "data_size": 63488 00:10:53.683 }, 00:10:53.683 { 00:10:53.683 "name": null, 00:10:53.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.683 "is_configured": false, 00:10:53.683 "data_offset": 0, 00:10:53.683 "data_size": 63488 00:10:53.683 }, 00:10:53.683 { 00:10:53.683 "name": null, 00:10:53.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.683 "is_configured": false, 00:10:53.683 "data_offset": 2048, 00:10:53.683 "data_size": 63488 00:10:53.683 }, 00:10:53.683 { 00:10:53.683 "name": null, 00:10:53.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.683 "is_configured": false, 00:10:53.683 "data_offset": 2048, 00:10:53.683 "data_size": 63488 00:10:53.683 } 00:10:53.683 ] 00:10:53.683 }' 00:10:53.683 17:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.683 17:52:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.943 [2024-10-25 17:52:12.280318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.943 [2024-10-25 17:52:12.280459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.943 [2024-10-25 17:52:12.280516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:53.943 [2024-10-25 17:52:12.280552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.943 [2024-10-25 17:52:12.281106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.943 [2024-10-25 17:52:12.281169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.943 [2024-10-25 17:52:12.281306] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.943 [2024-10-25 17:52:12.281362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.943 pt2 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.943 [2024-10-25 17:52:12.292270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:53.943 [2024-10-25 17:52:12.292374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.943 [2024-10-25 17:52:12.292429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:53.943 [2024-10-25 17:52:12.292471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.943 [2024-10-25 17:52:12.292954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.943 [2024-10-25 17:52:12.293013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.943 [2024-10-25 17:52:12.293122] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:53.943 [2024-10-25 17:52:12.293173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.943 pt3 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.943 [2024-10-25 17:52:12.300221] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:53.943 [2024-10-25 17:52:12.300276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.943 [2024-10-25 17:52:12.300299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:53.943 [2024-10-25 17:52:12.300308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.943 [2024-10-25 17:52:12.300733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.943 [2024-10-25 17:52:12.300755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:53.943 [2024-10-25 17:52:12.300821] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:53.943 [2024-10-25 17:52:12.300852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:53.943 [2024-10-25 17:52:12.301004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.943 [2024-10-25 17:52:12.301018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.943 [2024-10-25 17:52:12.301287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:53.943 [2024-10-25 17:52:12.301448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.943 [2024-10-25 17:52:12.301462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:53.943 [2024-10-25 17:52:12.301608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.943 pt4 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.943 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.943 "name": "raid_bdev1", 00:10:53.943 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:53.943 "strip_size_kb": 64, 00:10:53.943 "state": "online", 00:10:53.943 "raid_level": "concat", 00:10:53.943 
"superblock": true, 00:10:53.943 "num_base_bdevs": 4, 00:10:53.943 "num_base_bdevs_discovered": 4, 00:10:53.943 "num_base_bdevs_operational": 4, 00:10:53.943 "base_bdevs_list": [ 00:10:53.943 { 00:10:53.943 "name": "pt1", 00:10:53.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.943 "is_configured": true, 00:10:53.943 "data_offset": 2048, 00:10:53.943 "data_size": 63488 00:10:53.943 }, 00:10:53.943 { 00:10:53.943 "name": "pt2", 00:10:53.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.944 "is_configured": true, 00:10:53.944 "data_offset": 2048, 00:10:53.944 "data_size": 63488 00:10:53.944 }, 00:10:53.944 { 00:10:53.944 "name": "pt3", 00:10:53.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.944 "is_configured": true, 00:10:53.944 "data_offset": 2048, 00:10:53.944 "data_size": 63488 00:10:53.944 }, 00:10:53.944 { 00:10:53.944 "name": "pt4", 00:10:53.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.944 "is_configured": true, 00:10:53.944 "data_offset": 2048, 00:10:53.944 "data_size": 63488 00:10:53.944 } 00:10:53.944 ] 00:10:53.944 }' 00:10:53.944 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.944 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.512 17:52:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.512 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.513 [2024-10-25 17:52:12.736666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.513 "name": "raid_bdev1", 00:10:54.513 "aliases": [ 00:10:54.513 "81150a36-3fb4-46dd-9027-7b74d7703c46" 00:10:54.513 ], 00:10:54.513 "product_name": "Raid Volume", 00:10:54.513 "block_size": 512, 00:10:54.513 "num_blocks": 253952, 00:10:54.513 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:54.513 "assigned_rate_limits": { 00:10:54.513 "rw_ios_per_sec": 0, 00:10:54.513 "rw_mbytes_per_sec": 0, 00:10:54.513 "r_mbytes_per_sec": 0, 00:10:54.513 "w_mbytes_per_sec": 0 00:10:54.513 }, 00:10:54.513 "claimed": false, 00:10:54.513 "zoned": false, 00:10:54.513 "supported_io_types": { 00:10:54.513 "read": true, 00:10:54.513 "write": true, 00:10:54.513 "unmap": true, 00:10:54.513 "flush": true, 00:10:54.513 "reset": true, 00:10:54.513 "nvme_admin": false, 00:10:54.513 "nvme_io": false, 00:10:54.513 "nvme_io_md": false, 00:10:54.513 "write_zeroes": true, 00:10:54.513 "zcopy": false, 00:10:54.513 "get_zone_info": false, 00:10:54.513 "zone_management": false, 00:10:54.513 "zone_append": false, 00:10:54.513 "compare": false, 00:10:54.513 "compare_and_write": false, 00:10:54.513 "abort": false, 00:10:54.513 "seek_hole": false, 00:10:54.513 "seek_data": false, 00:10:54.513 "copy": false, 00:10:54.513 "nvme_iov_md": false 00:10:54.513 }, 00:10:54.513 
"memory_domains": [ 00:10:54.513 { 00:10:54.513 "dma_device_id": "system", 00:10:54.513 "dma_device_type": 1 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.513 "dma_device_type": 2 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "system", 00:10:54.513 "dma_device_type": 1 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.513 "dma_device_type": 2 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "system", 00:10:54.513 "dma_device_type": 1 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.513 "dma_device_type": 2 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "system", 00:10:54.513 "dma_device_type": 1 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.513 "dma_device_type": 2 00:10:54.513 } 00:10:54.513 ], 00:10:54.513 "driver_specific": { 00:10:54.513 "raid": { 00:10:54.513 "uuid": "81150a36-3fb4-46dd-9027-7b74d7703c46", 00:10:54.513 "strip_size_kb": 64, 00:10:54.513 "state": "online", 00:10:54.513 "raid_level": "concat", 00:10:54.513 "superblock": true, 00:10:54.513 "num_base_bdevs": 4, 00:10:54.513 "num_base_bdevs_discovered": 4, 00:10:54.513 "num_base_bdevs_operational": 4, 00:10:54.513 "base_bdevs_list": [ 00:10:54.513 { 00:10:54.513 "name": "pt1", 00:10:54.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.513 "is_configured": true, 00:10:54.513 "data_offset": 2048, 00:10:54.513 "data_size": 63488 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "name": "pt2", 00:10:54.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.513 "is_configured": true, 00:10:54.513 "data_offset": 2048, 00:10:54.513 "data_size": 63488 00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "name": "pt3", 00:10:54.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.513 "is_configured": true, 00:10:54.513 "data_offset": 2048, 00:10:54.513 "data_size": 63488 
00:10:54.513 }, 00:10:54.513 { 00:10:54.513 "name": "pt4", 00:10:54.513 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:54.513 "is_configured": true, 00:10:54.513 "data_offset": 2048, 00:10:54.513 "data_size": 63488 00:10:54.513 } 00:10:54.513 ] 00:10:54.513 } 00:10:54.513 } 00:10:54.513 }' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:54.513 pt2 00:10:54.513 pt3 00:10:54.513 pt4' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.773 17:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:54.773 [2024-10-25 17:52:13.040624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81150a36-3fb4-46dd-9027-7b74d7703c46 '!=' 81150a36-3fb4-46dd-9027-7b74d7703c46 ']' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72344 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72344 ']' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72344 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72344 00:10:54.773 killing process with pid 72344 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72344' 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72344 00:10:54.773 [2024-10-25 17:52:13.122491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.773 [2024-10-25 17:52:13.122584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.773 17:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72344 00:10:54.773 [2024-10-25 17:52:13.122657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.773 [2024-10-25 17:52:13.122667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:55.342 [2024-10-25 17:52:13.556583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.722 17:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:56.722 00:10:56.722 real 0m5.758s 00:10:56.722 user 0m8.159s 00:10:56.722 sys 0m0.993s 00:10:56.722 ************************************ 00:10:56.722 END TEST raid_superblock_test 00:10:56.722 ************************************ 00:10:56.722 17:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.722 17:52:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.722 17:52:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:56.723 17:52:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:56.723 17:52:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.723 17:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.723 ************************************ 00:10:56.723 START TEST raid_read_error_test 00:10:56.723 ************************************ 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7I2fXZYrkd 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72614 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72614 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72614 ']' 00:10:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.723 17:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.723 [2024-10-25 17:52:14.971766] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:10:56.723 [2024-10-25 17:52:14.971919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72614 ] 00:10:56.723 [2024-10-25 17:52:15.151556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.982 [2024-10-25 17:52:15.274820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.242 [2024-10-25 17:52:15.484375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.242 [2024-10-25 17:52:15.484505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 BaseBdev1_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 true 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 [2024-10-25 17:52:15.875109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.610 [2024-10-25 17:52:15.875262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.610 [2024-10-25 17:52:15.875308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.610 [2024-10-25 17:52:15.875347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.610 [2024-10-25 17:52:15.877645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.610 [2024-10-25 17:52:15.877730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.610 BaseBdev1 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 BaseBdev2_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 true 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 [2024-10-25 17:52:15.946600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.610 [2024-10-25 17:52:15.946671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.610 [2024-10-25 17:52:15.946690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.610 [2024-10-25 17:52:15.946702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.610 [2024-10-25 17:52:15.949066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.610 [2024-10-25 17:52:15.949111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.610 BaseBdev2 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 BaseBdev3_malloc 00:10:57.610 17:52:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 true 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.610 [2024-10-25 17:52:16.030942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:57.610 [2024-10-25 17:52:16.031003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.610 [2024-10-25 17:52:16.031023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.610 [2024-10-25 17:52:16.031034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.610 [2024-10-25 17:52:16.033184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.610 [2024-10-25 17:52:16.033229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:57.610 BaseBdev3 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.610 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.875 BaseBdev4_malloc 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.875 true 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.875 [2024-10-25 17:52:16.097699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:57.875 [2024-10-25 17:52:16.097760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.875 [2024-10-25 17:52:16.097778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.875 [2024-10-25 17:52:16.097789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.875 [2024-10-25 17:52:16.099828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.875 [2024-10-25 17:52:16.099883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:57.875 BaseBdev4 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.875 [2024-10-25 17:52:16.109738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.875 [2024-10-25 17:52:16.111555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.875 [2024-10-25 17:52:16.111629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.875 [2024-10-25 17:52:16.111696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.875 [2024-10-25 17:52:16.111941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:57.875 [2024-10-25 17:52:16.111955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.875 [2024-10-25 17:52:16.112224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.875 [2024-10-25 17:52:16.112390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:57.875 [2024-10-25 17:52:16.112402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:57.875 [2024-10-25 17:52:16.112557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:57.875 17:52:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.875 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.875 "name": "raid_bdev1", 00:10:57.875 "uuid": "bad96ffa-1377-4cca-b543-6e808b05cecc", 00:10:57.875 "strip_size_kb": 64, 00:10:57.875 "state": "online", 00:10:57.875 "raid_level": "concat", 00:10:57.875 "superblock": true, 00:10:57.875 "num_base_bdevs": 4, 00:10:57.875 "num_base_bdevs_discovered": 4, 00:10:57.875 "num_base_bdevs_operational": 4, 00:10:57.875 "base_bdevs_list": [ 
00:10:57.875 { 00:10:57.875 "name": "BaseBdev1", 00:10:57.875 "uuid": "3677e6ca-aed1-5474-b2c3-5d81adb0b2c3", 00:10:57.875 "is_configured": true, 00:10:57.875 "data_offset": 2048, 00:10:57.875 "data_size": 63488 00:10:57.875 }, 00:10:57.875 { 00:10:57.875 "name": "BaseBdev2", 00:10:57.875 "uuid": "bdffa2af-2975-553a-94f3-c7f2a65737c1", 00:10:57.875 "is_configured": true, 00:10:57.875 "data_offset": 2048, 00:10:57.875 "data_size": 63488 00:10:57.875 }, 00:10:57.875 { 00:10:57.875 "name": "BaseBdev3", 00:10:57.875 "uuid": "75e79cf6-2625-551e-8bda-fc1b5fedc10e", 00:10:57.875 "is_configured": true, 00:10:57.875 "data_offset": 2048, 00:10:57.875 "data_size": 63488 00:10:57.875 }, 00:10:57.875 { 00:10:57.875 "name": "BaseBdev4", 00:10:57.875 "uuid": "07dae356-c23d-505a-9228-f2abb1d7cacf", 00:10:57.876 "is_configured": true, 00:10:57.876 "data_offset": 2048, 00:10:57.876 "data_size": 63488 00:10:57.876 } 00:10:57.876 ] 00:10:57.876 }' 00:10:57.876 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.876 17:52:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.135 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.135 17:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.394 [2024-10-25 17:52:16.662341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:59.332 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:59.332 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.332 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.332 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.332 17:52:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.333 17:52:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.333 "name": "raid_bdev1", 00:10:59.333 "uuid": "bad96ffa-1377-4cca-b543-6e808b05cecc", 00:10:59.333 "strip_size_kb": 64, 00:10:59.333 "state": "online", 00:10:59.333 "raid_level": "concat", 00:10:59.333 "superblock": true, 00:10:59.333 "num_base_bdevs": 4, 00:10:59.333 "num_base_bdevs_discovered": 4, 00:10:59.333 "num_base_bdevs_operational": 4, 00:10:59.333 "base_bdevs_list": [ 00:10:59.333 { 00:10:59.333 "name": "BaseBdev1", 00:10:59.333 "uuid": "3677e6ca-aed1-5474-b2c3-5d81adb0b2c3", 00:10:59.333 "is_configured": true, 00:10:59.333 "data_offset": 2048, 00:10:59.333 "data_size": 63488 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "name": "BaseBdev2", 00:10:59.333 "uuid": "bdffa2af-2975-553a-94f3-c7f2a65737c1", 00:10:59.333 "is_configured": true, 00:10:59.333 "data_offset": 2048, 00:10:59.333 "data_size": 63488 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "name": "BaseBdev3", 00:10:59.333 "uuid": "75e79cf6-2625-551e-8bda-fc1b5fedc10e", 00:10:59.333 "is_configured": true, 00:10:59.333 "data_offset": 2048, 00:10:59.333 "data_size": 63488 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "name": "BaseBdev4", 00:10:59.333 "uuid": "07dae356-c23d-505a-9228-f2abb1d7cacf", 00:10:59.333 "is_configured": true, 00:10:59.333 "data_offset": 2048, 00:10:59.333 "data_size": 63488 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }' 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.333 17:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.592 [2024-10-25 17:52:18.011232] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.592 [2024-10-25 17:52:18.011266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.592 [2024-10-25 17:52:18.014124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.592 [2024-10-25 17:52:18.014187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.592 [2024-10-25 17:52:18.014233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.592 [2024-10-25 17:52:18.014247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:59.592 { 00:10:59.592 "results": [ 00:10:59.592 { 00:10:59.592 "job": "raid_bdev1", 00:10:59.592 "core_mask": "0x1", 00:10:59.592 "workload": "randrw", 00:10:59.592 "percentage": 50, 00:10:59.592 "status": "finished", 00:10:59.592 "queue_depth": 1, 00:10:59.592 "io_size": 131072, 00:10:59.592 "runtime": 1.348784, 00:10:59.592 "iops": 13647.848728929168, 00:10:59.592 "mibps": 1705.981091116146, 00:10:59.592 "io_failed": 1, 00:10:59.592 "io_timeout": 0, 00:10:59.592 "avg_latency_us": 101.58691906204034, 00:10:59.592 "min_latency_us": 26.606113537117903, 00:10:59.592 "max_latency_us": 1767.1825327510917 00:10:59.592 } 00:10:59.592 ], 00:10:59.592 "core_count": 1 00:10:59.592 } 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72614 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72614 ']' 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72614 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.592 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72614 00:10:59.852 killing process with pid 72614 00:10:59.852 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.852 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.852 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72614' 00:10:59.852 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72614 00:10:59.852 [2024-10-25 17:52:18.057620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.852 17:52:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72614 00:11:00.113 [2024-10-25 17:52:18.437803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7I2fXZYrkd 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:01.497 00:11:01.497 real 0m5.005s 00:11:01.497 user 0m5.823s 00:11:01.497 sys 0m0.615s 00:11:01.497 ************************************ 00:11:01.497 END TEST raid_read_error_test 
00:11:01.497 ************************************ 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.497 17:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.497 17:52:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:01.497 17:52:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:01.497 17:52:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.497 17:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.497 ************************************ 00:11:01.497 START TEST raid_write_error_test 00:11:01.497 ************************************ 00:11:01.497 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:01.497 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:01.497 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:01.497 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:01.756 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:01.756 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.756 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J9M20DGf9z 00:11:01.757 17:52:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72761 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72761 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72761 ']' 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.757 17:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.757 [2024-10-25 17:52:20.047449] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:11:01.757 [2024-10-25 17:52:20.047581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72761 ] 00:11:02.018 [2024-10-25 17:52:20.228970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.018 [2024-10-25 17:52:20.356030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.277 [2024-10-25 17:52:20.576245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.277 [2024-10-25 17:52:20.576312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.539 BaseBdev1_malloc 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.539 true 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.539 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 [2024-10-25 17:52:20.978906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:02.801 [2024-10-25 17:52:20.979008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.801 [2024-10-25 17:52:20.979047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:02.801 [2024-10-25 17:52:20.979078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.801 [2024-10-25 17:52:20.981477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.801 [2024-10-25 17:52:20.981554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:02.801 BaseBdev1 00:11:02.801 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.801 17:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.801 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 BaseBdev2_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:02.801 17:52:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 true 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 [2024-10-25 17:52:21.047637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:02.801 [2024-10-25 17:52:21.047696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.801 [2024-10-25 17:52:21.047714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:02.801 [2024-10-25 17:52:21.047725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.801 [2024-10-25 17:52:21.050152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.801 [2024-10-25 17:52:21.050256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:02.801 BaseBdev2 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:02.801 BaseBdev3_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 true 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 [2024-10-25 17:52:21.131707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:02.801 [2024-10-25 17:52:21.131761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.801 [2024-10-25 17:52:21.131780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:02.801 [2024-10-25 17:52:21.131790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.801 [2024-10-25 17:52:21.134058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.801 [2024-10-25 17:52:21.134107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:02.801 BaseBdev3 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 BaseBdev4_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 true 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 [2024-10-25 17:52:21.200883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:02.801 [2024-10-25 17:52:21.200994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.801 [2024-10-25 17:52:21.201038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:02.801 [2024-10-25 17:52:21.201052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.801 [2024-10-25 17:52:21.203469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.801 [2024-10-25 17:52:21.203512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:02.801 BaseBdev4 
00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.801 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.801 [2024-10-25 17:52:21.212931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.801 [2024-10-25 17:52:21.214989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.801 [2024-10-25 17:52:21.215074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.801 [2024-10-25 17:52:21.215155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.801 [2024-10-25 17:52:21.215427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:02.802 [2024-10-25 17:52:21.215443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.802 [2024-10-25 17:52:21.215727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.802 [2024-10-25 17:52:21.215921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:02.802 [2024-10-25 17:52:21.215934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:02.802 [2024-10-25 17:52:21.216146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.802 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.063 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.063 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.063 "name": "raid_bdev1", 00:11:03.063 "uuid": "cbba627f-04cb-4a3a-9cfb-7f0ad28d8ae1", 00:11:03.063 "strip_size_kb": 64, 00:11:03.063 "state": "online", 00:11:03.063 "raid_level": "concat", 00:11:03.063 "superblock": true, 00:11:03.063 "num_base_bdevs": 4, 00:11:03.063 "num_base_bdevs_discovered": 4, 00:11:03.063 
"num_base_bdevs_operational": 4, 00:11:03.063 "base_bdevs_list": [ 00:11:03.063 { 00:11:03.063 "name": "BaseBdev1", 00:11:03.063 "uuid": "fa053df2-a01b-5d32-b58b-4cb4b6544227", 00:11:03.063 "is_configured": true, 00:11:03.063 "data_offset": 2048, 00:11:03.063 "data_size": 63488 00:11:03.063 }, 00:11:03.063 { 00:11:03.063 "name": "BaseBdev2", 00:11:03.063 "uuid": "e2ae7bd1-2eb7-5005-bf0a-ac2648ad3377", 00:11:03.063 "is_configured": true, 00:11:03.063 "data_offset": 2048, 00:11:03.063 "data_size": 63488 00:11:03.063 }, 00:11:03.063 { 00:11:03.063 "name": "BaseBdev3", 00:11:03.063 "uuid": "58f1c457-2511-5693-89d4-1881fc48e2ff", 00:11:03.063 "is_configured": true, 00:11:03.063 "data_offset": 2048, 00:11:03.063 "data_size": 63488 00:11:03.063 }, 00:11:03.063 { 00:11:03.063 "name": "BaseBdev4", 00:11:03.063 "uuid": "72093a2d-1fa7-5c47-8bc7-6a0b3fa459a1", 00:11:03.063 "is_configured": true, 00:11:03.063 "data_offset": 2048, 00:11:03.063 "data_size": 63488 00:11:03.063 } 00:11:03.063 ] 00:11:03.063 }' 00:11:03.063 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.063 17:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.323 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:03.323 17:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.583 [2024-10-25 17:52:21.801353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.523 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.524 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.524 17:52:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.524 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.524 "name": "raid_bdev1", 00:11:04.524 "uuid": "cbba627f-04cb-4a3a-9cfb-7f0ad28d8ae1", 00:11:04.524 "strip_size_kb": 64, 00:11:04.524 "state": "online", 00:11:04.524 "raid_level": "concat", 00:11:04.524 "superblock": true, 00:11:04.524 "num_base_bdevs": 4, 00:11:04.524 "num_base_bdevs_discovered": 4, 00:11:04.524 "num_base_bdevs_operational": 4, 00:11:04.524 "base_bdevs_list": [ 00:11:04.524 { 00:11:04.524 "name": "BaseBdev1", 00:11:04.524 "uuid": "fa053df2-a01b-5d32-b58b-4cb4b6544227", 00:11:04.524 "is_configured": true, 00:11:04.524 "data_offset": 2048, 00:11:04.524 "data_size": 63488 00:11:04.524 }, 00:11:04.524 { 00:11:04.524 "name": "BaseBdev2", 00:11:04.524 "uuid": "e2ae7bd1-2eb7-5005-bf0a-ac2648ad3377", 00:11:04.524 "is_configured": true, 00:11:04.524 "data_offset": 2048, 00:11:04.524 "data_size": 63488 00:11:04.524 }, 00:11:04.524 { 00:11:04.524 "name": "BaseBdev3", 00:11:04.524 "uuid": "58f1c457-2511-5693-89d4-1881fc48e2ff", 00:11:04.524 "is_configured": true, 00:11:04.524 "data_offset": 2048, 00:11:04.524 "data_size": 63488 00:11:04.524 }, 00:11:04.524 { 00:11:04.524 "name": "BaseBdev4", 00:11:04.524 "uuid": "72093a2d-1fa7-5c47-8bc7-6a0b3fa459a1", 00:11:04.524 "is_configured": true, 00:11:04.524 "data_offset": 2048, 00:11:04.524 "data_size": 63488 00:11:04.524 } 00:11:04.524 ] 00:11:04.524 }' 00:11:04.524 17:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.524 17:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.784 [2024-10-25 17:52:23.169791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.784 [2024-10-25 17:52:23.169921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.784 [2024-10-25 17:52:23.173066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.784 [2024-10-25 17:52:23.173179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.784 [2024-10-25 17:52:23.173263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.784 [2024-10-25 17:52:23.173322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:04.784 { 00:11:04.784 "results": [ 00:11:04.784 { 00:11:04.784 "job": "raid_bdev1", 00:11:04.784 "core_mask": "0x1", 00:11:04.784 "workload": "randrw", 00:11:04.784 "percentage": 50, 00:11:04.784 "status": "finished", 00:11:04.784 "queue_depth": 1, 00:11:04.784 "io_size": 131072, 00:11:04.784 "runtime": 1.369255, 00:11:04.784 "iops": 14048.515433575192, 00:11:04.784 "mibps": 1756.064429196899, 00:11:04.784 "io_failed": 1, 00:11:04.784 "io_timeout": 0, 00:11:04.784 "avg_latency_us": 98.8539514350189, 00:11:04.784 "min_latency_us": 26.829694323144103, 00:11:04.784 "max_latency_us": 1681.3275109170306 00:11:04.784 } 00:11:04.784 ], 00:11:04.784 "core_count": 1 00:11:04.784 } 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72761 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72761 ']' 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72761 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72761 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72761' 00:11:04.784 killing process with pid 72761 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72761 00:11:04.784 [2024-10-25 17:52:23.215505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.784 17:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72761 00:11:05.352 [2024-10-25 17:52:23.577138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J9M20DGf9z 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:06.817 ************************************ 00:11:06.817 END TEST raid_write_error_test 00:11:06.817 ************************************ 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.817 17:52:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:06.817 00:11:06.817 real 0m4.988s 00:11:06.817 user 0m5.913s 00:11:06.817 sys 0m0.588s 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.817 17:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 17:52:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:06.817 17:52:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:06.817 17:52:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:06.817 17:52:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.817 17:52:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 ************************************ 00:11:06.817 START TEST raid_state_function_test 00:11:06.817 ************************************ 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:06.817 17:52:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72912 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72912' 00:11:06.817 Process raid pid: 72912 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72912 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72912 ']' 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.817 17:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 [2024-10-25 17:52:25.082027] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:11:06.817 [2024-10-25 17:52:25.082232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.817 [2024-10-25 17:52:25.248957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.077 [2024-10-25 17:52:25.380874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.337 [2024-10-25 17:52:25.629431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.337 [2024-10-25 17:52:25.629575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.598 [2024-10-25 17:52:25.986823] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.598 [2024-10-25 17:52:25.986948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.598 [2024-10-25 17:52:25.986966] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.598 [2024-10-25 17:52:25.986979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.598 [2024-10-25 17:52:25.986987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:07.598 [2024-10-25 17:52:25.986998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.598 [2024-10-25 17:52:25.987005] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:07.598 [2024-10-25 17:52:25.987015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.598 17:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:07.598 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.598 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.857 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.857 "name": "Existed_Raid", 00:11:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.857 "strip_size_kb": 0, 00:11:07.857 "state": "configuring", 00:11:07.857 "raid_level": "raid1", 00:11:07.857 "superblock": false, 00:11:07.857 "num_base_bdevs": 4, 00:11:07.857 "num_base_bdevs_discovered": 0, 00:11:07.857 "num_base_bdevs_operational": 4, 00:11:07.857 "base_bdevs_list": [ 00:11:07.857 { 00:11:07.857 "name": "BaseBdev1", 00:11:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.857 "is_configured": false, 00:11:07.857 "data_offset": 0, 00:11:07.857 "data_size": 0 00:11:07.857 }, 00:11:07.857 { 00:11:07.857 "name": "BaseBdev2", 00:11:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.857 "is_configured": false, 00:11:07.857 "data_offset": 0, 00:11:07.857 "data_size": 0 00:11:07.857 }, 00:11:07.857 { 00:11:07.857 "name": "BaseBdev3", 00:11:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.857 "is_configured": false, 00:11:07.857 "data_offset": 0, 00:11:07.857 "data_size": 0 00:11:07.857 }, 00:11:07.857 { 00:11:07.857 "name": "BaseBdev4", 00:11:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.857 "is_configured": false, 00:11:07.857 "data_offset": 0, 00:11:07.857 "data_size": 0 00:11:07.857 } 00:11:07.857 ] 00:11:07.857 }' 00:11:07.857 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.857 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.118 [2024-10-25 17:52:26.481938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.118 [2024-10-25 17:52:26.482047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.118 [2024-10-25 17:52:26.493914] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.118 [2024-10-25 17:52:26.493998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.118 [2024-10-25 17:52:26.494028] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.118 [2024-10-25 17:52:26.494056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.118 [2024-10-25 17:52:26.494077] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.118 [2024-10-25 17:52:26.494101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.118 [2024-10-25 17:52:26.494121] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.118 [2024-10-25 17:52:26.494157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.118 [2024-10-25 17:52:26.544318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.118 BaseBdev1 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.118 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 [ 00:11:08.380 { 00:11:08.380 "name": "BaseBdev1", 00:11:08.380 "aliases": [ 00:11:08.380 "2f991a77-a340-431c-9cdf-75a93459df32" 00:11:08.380 ], 00:11:08.380 "product_name": "Malloc disk", 00:11:08.380 "block_size": 512, 00:11:08.380 "num_blocks": 65536, 00:11:08.380 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:08.380 "assigned_rate_limits": { 00:11:08.380 "rw_ios_per_sec": 0, 00:11:08.380 "rw_mbytes_per_sec": 0, 00:11:08.380 "r_mbytes_per_sec": 0, 00:11:08.380 "w_mbytes_per_sec": 0 00:11:08.380 }, 00:11:08.380 "claimed": true, 00:11:08.380 "claim_type": "exclusive_write", 00:11:08.380 "zoned": false, 00:11:08.380 "supported_io_types": { 00:11:08.380 "read": true, 00:11:08.380 "write": true, 00:11:08.380 "unmap": true, 00:11:08.380 "flush": true, 00:11:08.380 "reset": true, 00:11:08.380 "nvme_admin": false, 00:11:08.380 "nvme_io": false, 00:11:08.380 "nvme_io_md": false, 00:11:08.380 "write_zeroes": true, 00:11:08.380 "zcopy": true, 00:11:08.380 "get_zone_info": false, 00:11:08.380 "zone_management": false, 00:11:08.380 "zone_append": false, 00:11:08.380 "compare": false, 00:11:08.380 "compare_and_write": false, 00:11:08.380 "abort": true, 00:11:08.380 "seek_hole": false, 00:11:08.380 "seek_data": false, 00:11:08.380 "copy": true, 00:11:08.380 "nvme_iov_md": false 00:11:08.380 }, 00:11:08.380 "memory_domains": [ 00:11:08.380 { 00:11:08.380 "dma_device_id": "system", 00:11:08.380 "dma_device_type": 1 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.380 "dma_device_type": 2 00:11:08.380 } 00:11:08.380 ], 00:11:08.380 "driver_specific": {} 00:11:08.380 } 00:11:08.380 ] 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.380 "name": "Existed_Raid", 00:11:08.380 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:08.380 "strip_size_kb": 0, 00:11:08.380 "state": "configuring", 00:11:08.380 "raid_level": "raid1", 00:11:08.380 "superblock": false, 00:11:08.380 "num_base_bdevs": 4, 00:11:08.380 "num_base_bdevs_discovered": 1, 00:11:08.380 "num_base_bdevs_operational": 4, 00:11:08.380 "base_bdevs_list": [ 00:11:08.380 { 00:11:08.380 "name": "BaseBdev1", 00:11:08.380 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:08.380 "is_configured": true, 00:11:08.380 "data_offset": 0, 00:11:08.380 "data_size": 65536 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev2", 00:11:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.380 "is_configured": false, 00:11:08.380 "data_offset": 0, 00:11:08.380 "data_size": 0 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev3", 00:11:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.380 "is_configured": false, 00:11:08.380 "data_offset": 0, 00:11:08.380 "data_size": 0 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev4", 00:11:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.380 "is_configured": false, 00:11:08.380 "data_offset": 0, 00:11:08.380 "data_size": 0 00:11:08.380 } 00:11:08.380 ] 00:11:08.380 }' 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.380 17:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.641 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:08.641 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.641 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.641 [2024-10-25 17:52:27.007600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.641 [2024-10-25 17:52:27.007723] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:08.641 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.641 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.642 [2024-10-25 17:52:27.015629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.642 [2024-10-25 17:52:27.017705] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.642 [2024-10-25 17:52:27.017757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.642 [2024-10-25 17:52:27.017769] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.642 [2024-10-25 17:52:27.017782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.642 [2024-10-25 17:52:27.017791] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.642 [2024-10-25 17:52:27.017801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.642 17:52:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.642 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.902 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.902 "name": "Existed_Raid", 00:11:08.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.903 "strip_size_kb": 0, 00:11:08.903 "state": "configuring", 00:11:08.903 "raid_level": "raid1", 00:11:08.903 "superblock": false, 00:11:08.903 "num_base_bdevs": 4, 00:11:08.903 "num_base_bdevs_discovered": 1, 00:11:08.903 
"num_base_bdevs_operational": 4, 00:11:08.903 "base_bdevs_list": [ 00:11:08.903 { 00:11:08.903 "name": "BaseBdev1", 00:11:08.903 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:08.903 "is_configured": true, 00:11:08.903 "data_offset": 0, 00:11:08.903 "data_size": 65536 00:11:08.903 }, 00:11:08.903 { 00:11:08.903 "name": "BaseBdev2", 00:11:08.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.903 "is_configured": false, 00:11:08.903 "data_offset": 0, 00:11:08.903 "data_size": 0 00:11:08.903 }, 00:11:08.903 { 00:11:08.903 "name": "BaseBdev3", 00:11:08.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.903 "is_configured": false, 00:11:08.903 "data_offset": 0, 00:11:08.903 "data_size": 0 00:11:08.903 }, 00:11:08.903 { 00:11:08.903 "name": "BaseBdev4", 00:11:08.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.903 "is_configured": false, 00:11:08.903 "data_offset": 0, 00:11:08.903 "data_size": 0 00:11:08.903 } 00:11:08.903 ] 00:11:08.903 }' 00:11:08.903 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.903 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.163 [2024-10-25 17:52:27.538895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.163 BaseBdev2 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.163 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.163 [ 00:11:09.163 { 00:11:09.163 "name": "BaseBdev2", 00:11:09.163 "aliases": [ 00:11:09.163 "0af59fb2-4859-4327-af6c-a01ce570dd0c" 00:11:09.163 ], 00:11:09.163 "product_name": "Malloc disk", 00:11:09.163 "block_size": 512, 00:11:09.163 "num_blocks": 65536, 00:11:09.163 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:09.163 "assigned_rate_limits": { 00:11:09.163 "rw_ios_per_sec": 0, 00:11:09.163 "rw_mbytes_per_sec": 0, 00:11:09.163 "r_mbytes_per_sec": 0, 00:11:09.163 "w_mbytes_per_sec": 0 00:11:09.163 }, 00:11:09.164 "claimed": true, 00:11:09.164 "claim_type": "exclusive_write", 00:11:09.164 "zoned": false, 00:11:09.164 "supported_io_types": { 00:11:09.164 "read": true, 00:11:09.164 "write": true, 00:11:09.164 
"unmap": true, 00:11:09.164 "flush": true, 00:11:09.164 "reset": true, 00:11:09.164 "nvme_admin": false, 00:11:09.164 "nvme_io": false, 00:11:09.164 "nvme_io_md": false, 00:11:09.164 "write_zeroes": true, 00:11:09.164 "zcopy": true, 00:11:09.164 "get_zone_info": false, 00:11:09.164 "zone_management": false, 00:11:09.164 "zone_append": false, 00:11:09.164 "compare": false, 00:11:09.164 "compare_and_write": false, 00:11:09.164 "abort": true, 00:11:09.164 "seek_hole": false, 00:11:09.164 "seek_data": false, 00:11:09.164 "copy": true, 00:11:09.164 "nvme_iov_md": false 00:11:09.164 }, 00:11:09.164 "memory_domains": [ 00:11:09.164 { 00:11:09.164 "dma_device_id": "system", 00:11:09.164 "dma_device_type": 1 00:11:09.164 }, 00:11:09.164 { 00:11:09.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.164 "dma_device_type": 2 00:11:09.164 } 00:11:09.164 ], 00:11:09.164 "driver_specific": {} 00:11:09.164 } 00:11:09.164 ] 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.164 17:52:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.164 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.424 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.424 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.424 "name": "Existed_Raid", 00:11:09.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.424 "strip_size_kb": 0, 00:11:09.424 "state": "configuring", 00:11:09.424 "raid_level": "raid1", 00:11:09.424 "superblock": false, 00:11:09.424 "num_base_bdevs": 4, 00:11:09.424 "num_base_bdevs_discovered": 2, 00:11:09.424 "num_base_bdevs_operational": 4, 00:11:09.424 "base_bdevs_list": [ 00:11:09.424 { 00:11:09.424 "name": "BaseBdev1", 00:11:09.424 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:09.424 "is_configured": true, 00:11:09.424 "data_offset": 0, 00:11:09.424 "data_size": 65536 00:11:09.424 }, 00:11:09.424 { 00:11:09.424 "name": "BaseBdev2", 00:11:09.424 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:09.424 "is_configured": true, 00:11:09.424 
"data_offset": 0, 00:11:09.424 "data_size": 65536 00:11:09.424 }, 00:11:09.424 { 00:11:09.424 "name": "BaseBdev3", 00:11:09.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.424 "is_configured": false, 00:11:09.424 "data_offset": 0, 00:11:09.424 "data_size": 0 00:11:09.424 }, 00:11:09.424 { 00:11:09.424 "name": "BaseBdev4", 00:11:09.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.424 "is_configured": false, 00:11:09.424 "data_offset": 0, 00:11:09.424 "data_size": 0 00:11:09.424 } 00:11:09.424 ] 00:11:09.424 }' 00:11:09.424 17:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.424 17:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.684 [2024-10-25 17:52:28.067486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.684 BaseBdev3 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.684 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.685 [ 00:11:09.685 { 00:11:09.685 "name": "BaseBdev3", 00:11:09.685 "aliases": [ 00:11:09.685 "d1982650-913d-4552-be56-e0ad9ed407e7" 00:11:09.685 ], 00:11:09.685 "product_name": "Malloc disk", 00:11:09.685 "block_size": 512, 00:11:09.685 "num_blocks": 65536, 00:11:09.685 "uuid": "d1982650-913d-4552-be56-e0ad9ed407e7", 00:11:09.685 "assigned_rate_limits": { 00:11:09.685 "rw_ios_per_sec": 0, 00:11:09.685 "rw_mbytes_per_sec": 0, 00:11:09.685 "r_mbytes_per_sec": 0, 00:11:09.685 "w_mbytes_per_sec": 0 00:11:09.685 }, 00:11:09.685 "claimed": true, 00:11:09.685 "claim_type": "exclusive_write", 00:11:09.685 "zoned": false, 00:11:09.685 "supported_io_types": { 00:11:09.685 "read": true, 00:11:09.685 "write": true, 00:11:09.685 "unmap": true, 00:11:09.685 "flush": true, 00:11:09.685 "reset": true, 00:11:09.685 "nvme_admin": false, 00:11:09.685 "nvme_io": false, 00:11:09.685 "nvme_io_md": false, 00:11:09.685 "write_zeroes": true, 00:11:09.685 "zcopy": true, 00:11:09.685 "get_zone_info": false, 00:11:09.685 "zone_management": false, 00:11:09.685 "zone_append": false, 00:11:09.685 "compare": false, 00:11:09.685 "compare_and_write": false, 00:11:09.685 "abort": true, 
00:11:09.685 "seek_hole": false, 00:11:09.685 "seek_data": false, 00:11:09.685 "copy": true, 00:11:09.685 "nvme_iov_md": false 00:11:09.685 }, 00:11:09.685 "memory_domains": [ 00:11:09.685 { 00:11:09.685 "dma_device_id": "system", 00:11:09.685 "dma_device_type": 1 00:11:09.685 }, 00:11:09.685 { 00:11:09.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.685 "dma_device_type": 2 00:11:09.685 } 00:11:09.685 ], 00:11:09.685 "driver_specific": {} 00:11:09.685 } 00:11:09.685 ] 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.685 17:52:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.685 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.945 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.945 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.945 "name": "Existed_Raid", 00:11:09.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.945 "strip_size_kb": 0, 00:11:09.945 "state": "configuring", 00:11:09.945 "raid_level": "raid1", 00:11:09.945 "superblock": false, 00:11:09.945 "num_base_bdevs": 4, 00:11:09.945 "num_base_bdevs_discovered": 3, 00:11:09.945 "num_base_bdevs_operational": 4, 00:11:09.945 "base_bdevs_list": [ 00:11:09.945 { 00:11:09.945 "name": "BaseBdev1", 00:11:09.945 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:09.945 "is_configured": true, 00:11:09.945 "data_offset": 0, 00:11:09.945 "data_size": 65536 00:11:09.945 }, 00:11:09.945 { 00:11:09.945 "name": "BaseBdev2", 00:11:09.945 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:09.945 "is_configured": true, 00:11:09.945 "data_offset": 0, 00:11:09.945 "data_size": 65536 00:11:09.945 }, 00:11:09.945 { 00:11:09.945 "name": "BaseBdev3", 00:11:09.945 "uuid": "d1982650-913d-4552-be56-e0ad9ed407e7", 00:11:09.945 "is_configured": true, 00:11:09.945 "data_offset": 0, 00:11:09.945 "data_size": 65536 00:11:09.945 }, 00:11:09.945 { 00:11:09.945 "name": "BaseBdev4", 00:11:09.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.945 "is_configured": false, 00:11:09.945 "data_offset": 
0, 00:11:09.945 "data_size": 0 00:11:09.945 } 00:11:09.945 ] 00:11:09.945 }' 00:11:09.945 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.945 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.205 [2024-10-25 17:52:28.618948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.205 [2024-10-25 17:52:28.619088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.205 [2024-10-25 17:52:28.619114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.205 [2024-10-25 17:52:28.619443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:10.205 [2024-10-25 17:52:28.619662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.205 [2024-10-25 17:52:28.619712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:10.205 [2024-10-25 17:52:28.620051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.205 BaseBdev4 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.205 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.465 [ 00:11:10.465 { 00:11:10.465 "name": "BaseBdev4", 00:11:10.465 "aliases": [ 00:11:10.465 "2a5af499-f054-4f13-bdef-ba09b42532d7" 00:11:10.465 ], 00:11:10.465 "product_name": "Malloc disk", 00:11:10.465 "block_size": 512, 00:11:10.465 "num_blocks": 65536, 00:11:10.465 "uuid": "2a5af499-f054-4f13-bdef-ba09b42532d7", 00:11:10.465 "assigned_rate_limits": { 00:11:10.465 "rw_ios_per_sec": 0, 00:11:10.465 "rw_mbytes_per_sec": 0, 00:11:10.465 "r_mbytes_per_sec": 0, 00:11:10.465 "w_mbytes_per_sec": 0 00:11:10.465 }, 00:11:10.465 "claimed": true, 00:11:10.465 "claim_type": "exclusive_write", 00:11:10.465 "zoned": false, 00:11:10.465 "supported_io_types": { 00:11:10.465 "read": true, 00:11:10.465 "write": true, 00:11:10.465 "unmap": true, 00:11:10.465 "flush": true, 00:11:10.465 "reset": true, 00:11:10.465 "nvme_admin": false, 00:11:10.465 "nvme_io": 
false, 00:11:10.465 "nvme_io_md": false, 00:11:10.465 "write_zeroes": true, 00:11:10.465 "zcopy": true, 00:11:10.465 "get_zone_info": false, 00:11:10.465 "zone_management": false, 00:11:10.465 "zone_append": false, 00:11:10.465 "compare": false, 00:11:10.465 "compare_and_write": false, 00:11:10.465 "abort": true, 00:11:10.465 "seek_hole": false, 00:11:10.465 "seek_data": false, 00:11:10.465 "copy": true, 00:11:10.465 "nvme_iov_md": false 00:11:10.465 }, 00:11:10.465 "memory_domains": [ 00:11:10.465 { 00:11:10.465 "dma_device_id": "system", 00:11:10.465 "dma_device_type": 1 00:11:10.465 }, 00:11:10.465 { 00:11:10.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.465 "dma_device_type": 2 00:11:10.465 } 00:11:10.465 ], 00:11:10.465 "driver_specific": {} 00:11:10.465 } 00:11:10.465 ] 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.465 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.466 "name": "Existed_Raid", 00:11:10.466 "uuid": "ed2baca9-9ad1-4ba5-b037-544b2f795e5f", 00:11:10.466 "strip_size_kb": 0, 00:11:10.466 "state": "online", 00:11:10.466 "raid_level": "raid1", 00:11:10.466 "superblock": false, 00:11:10.466 "num_base_bdevs": 4, 00:11:10.466 "num_base_bdevs_discovered": 4, 00:11:10.466 "num_base_bdevs_operational": 4, 00:11:10.466 "base_bdevs_list": [ 00:11:10.466 { 00:11:10.466 "name": "BaseBdev1", 00:11:10.466 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:10.466 "is_configured": true, 00:11:10.466 "data_offset": 0, 00:11:10.466 "data_size": 65536 00:11:10.466 }, 00:11:10.466 { 00:11:10.466 "name": "BaseBdev2", 00:11:10.466 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:10.466 "is_configured": true, 00:11:10.466 "data_offset": 0, 00:11:10.466 "data_size": 65536 00:11:10.466 }, 00:11:10.466 { 00:11:10.466 "name": "BaseBdev3", 00:11:10.466 "uuid": "d1982650-913d-4552-be56-e0ad9ed407e7", 
00:11:10.466 "is_configured": true, 00:11:10.466 "data_offset": 0, 00:11:10.466 "data_size": 65536 00:11:10.466 }, 00:11:10.466 { 00:11:10.466 "name": "BaseBdev4", 00:11:10.466 "uuid": "2a5af499-f054-4f13-bdef-ba09b42532d7", 00:11:10.466 "is_configured": true, 00:11:10.466 "data_offset": 0, 00:11:10.466 "data_size": 65536 00:11:10.466 } 00:11:10.466 ] 00:11:10.466 }' 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.466 17:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.726 [2024-10-25 17:52:29.114645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.726 17:52:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.726 "name": "Existed_Raid", 00:11:10.726 "aliases": [ 00:11:10.726 "ed2baca9-9ad1-4ba5-b037-544b2f795e5f" 00:11:10.726 ], 00:11:10.726 "product_name": "Raid Volume", 00:11:10.726 "block_size": 512, 00:11:10.726 "num_blocks": 65536, 00:11:10.726 "uuid": "ed2baca9-9ad1-4ba5-b037-544b2f795e5f", 00:11:10.726 "assigned_rate_limits": { 00:11:10.726 "rw_ios_per_sec": 0, 00:11:10.726 "rw_mbytes_per_sec": 0, 00:11:10.726 "r_mbytes_per_sec": 0, 00:11:10.726 "w_mbytes_per_sec": 0 00:11:10.726 }, 00:11:10.726 "claimed": false, 00:11:10.726 "zoned": false, 00:11:10.726 "supported_io_types": { 00:11:10.726 "read": true, 00:11:10.726 "write": true, 00:11:10.726 "unmap": false, 00:11:10.726 "flush": false, 00:11:10.726 "reset": true, 00:11:10.726 "nvme_admin": false, 00:11:10.726 "nvme_io": false, 00:11:10.726 "nvme_io_md": false, 00:11:10.726 "write_zeroes": true, 00:11:10.726 "zcopy": false, 00:11:10.726 "get_zone_info": false, 00:11:10.726 "zone_management": false, 00:11:10.726 "zone_append": false, 00:11:10.726 "compare": false, 00:11:10.726 "compare_and_write": false, 00:11:10.726 "abort": false, 00:11:10.726 "seek_hole": false, 00:11:10.726 "seek_data": false, 00:11:10.726 "copy": false, 00:11:10.726 "nvme_iov_md": false 00:11:10.726 }, 00:11:10.726 "memory_domains": [ 00:11:10.726 { 00:11:10.726 "dma_device_id": "system", 00:11:10.726 "dma_device_type": 1 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.726 "dma_device_type": 2 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "system", 00:11:10.726 "dma_device_type": 1 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.726 "dma_device_type": 2 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "system", 00:11:10.726 "dma_device_type": 1 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.726 "dma_device_type": 2 
00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "system", 00:11:10.726 "dma_device_type": 1 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.726 "dma_device_type": 2 00:11:10.726 } 00:11:10.726 ], 00:11:10.726 "driver_specific": { 00:11:10.726 "raid": { 00:11:10.726 "uuid": "ed2baca9-9ad1-4ba5-b037-544b2f795e5f", 00:11:10.726 "strip_size_kb": 0, 00:11:10.726 "state": "online", 00:11:10.726 "raid_level": "raid1", 00:11:10.726 "superblock": false, 00:11:10.726 "num_base_bdevs": 4, 00:11:10.726 "num_base_bdevs_discovered": 4, 00:11:10.726 "num_base_bdevs_operational": 4, 00:11:10.726 "base_bdevs_list": [ 00:11:10.726 { 00:11:10.726 "name": "BaseBdev1", 00:11:10.726 "uuid": "2f991a77-a340-431c-9cdf-75a93459df32", 00:11:10.726 "is_configured": true, 00:11:10.726 "data_offset": 0, 00:11:10.726 "data_size": 65536 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "name": "BaseBdev2", 00:11:10.726 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:10.726 "is_configured": true, 00:11:10.727 "data_offset": 0, 00:11:10.727 "data_size": 65536 00:11:10.727 }, 00:11:10.727 { 00:11:10.727 "name": "BaseBdev3", 00:11:10.727 "uuid": "d1982650-913d-4552-be56-e0ad9ed407e7", 00:11:10.727 "is_configured": true, 00:11:10.727 "data_offset": 0, 00:11:10.727 "data_size": 65536 00:11:10.727 }, 00:11:10.727 { 00:11:10.727 "name": "BaseBdev4", 00:11:10.727 "uuid": "2a5af499-f054-4f13-bdef-ba09b42532d7", 00:11:10.727 "is_configured": true, 00:11:10.727 "data_offset": 0, 00:11:10.727 "data_size": 65536 00:11:10.727 } 00:11:10.727 ] 00:11:10.727 } 00:11:10.727 } 00:11:10.727 }' 00:11:10.727 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:10.986 BaseBdev2 00:11:10.986 BaseBdev3 00:11:10.986 BaseBdev4' 00:11:10.986 
17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.986 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.987 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.247 [2024-10-25 17:52:29.429717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.247 "name": "Existed_Raid", 00:11:11.247 "uuid": "ed2baca9-9ad1-4ba5-b037-544b2f795e5f", 00:11:11.247 "strip_size_kb": 0, 00:11:11.247 "state": "online", 00:11:11.247 "raid_level": "raid1", 00:11:11.247 "superblock": false, 00:11:11.247 "num_base_bdevs": 4, 00:11:11.247 "num_base_bdevs_discovered": 3, 00:11:11.247 "num_base_bdevs_operational": 3, 00:11:11.247 "base_bdevs_list": [ 00:11:11.247 { 00:11:11.247 "name": null, 00:11:11.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.247 "is_configured": false, 00:11:11.247 "data_offset": 0, 00:11:11.247 "data_size": 65536 00:11:11.247 }, 00:11:11.247 { 00:11:11.247 "name": "BaseBdev2", 00:11:11.247 "uuid": "0af59fb2-4859-4327-af6c-a01ce570dd0c", 00:11:11.247 "is_configured": true, 00:11:11.247 "data_offset": 0, 00:11:11.247 "data_size": 65536 00:11:11.247 }, 00:11:11.247 { 00:11:11.247 "name": "BaseBdev3", 00:11:11.247 "uuid": "d1982650-913d-4552-be56-e0ad9ed407e7", 00:11:11.247 "is_configured": true, 00:11:11.247 "data_offset": 0, 00:11:11.247 "data_size": 65536 00:11:11.247 }, 00:11:11.247 { 
00:11:11.247 "name": "BaseBdev4", 00:11:11.247 "uuid": "2a5af499-f054-4f13-bdef-ba09b42532d7", 00:11:11.247 "is_configured": true, 00:11:11.247 "data_offset": 0, 00:11:11.247 "data_size": 65536 00:11:11.247 } 00:11:11.247 ] 00:11:11.247 }' 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.247 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.507 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.766 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.766 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.766 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.767 17:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:11.767 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.767 17:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.767 [2024-10-25 17:52:29.967054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.767 
17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.767 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.767 [2024-10-25 17:52:30.125672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.031 17:52:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.031 [2024-10-25 17:52:30.279942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:12.031 [2024-10-25 17:52:30.280095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.031 [2024-10-25 17:52:30.383857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.031 [2024-10-25 17:52:30.383982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.031 [2024-10-25 17:52:30.384036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.031 17:52:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.031 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 BaseBdev2 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.299 17:52:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 [ 00:11:12.299 { 00:11:12.299 "name": "BaseBdev2", 00:11:12.299 "aliases": [ 00:11:12.299 "d0eed0e1-3935-43f6-bd81-af189cafc942" 00:11:12.299 ], 00:11:12.299 "product_name": "Malloc disk", 00:11:12.299 "block_size": 512, 00:11:12.299 "num_blocks": 65536, 00:11:12.299 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:12.299 "assigned_rate_limits": { 00:11:12.299 "rw_ios_per_sec": 0, 00:11:12.299 "rw_mbytes_per_sec": 0, 00:11:12.299 "r_mbytes_per_sec": 0, 00:11:12.299 "w_mbytes_per_sec": 0 00:11:12.299 }, 00:11:12.299 "claimed": false, 00:11:12.299 "zoned": false, 00:11:12.299 "supported_io_types": { 00:11:12.299 "read": true, 00:11:12.299 "write": true, 00:11:12.299 "unmap": true, 00:11:12.299 "flush": true, 00:11:12.299 "reset": true, 00:11:12.299 "nvme_admin": false, 00:11:12.299 "nvme_io": false, 00:11:12.299 "nvme_io_md": false, 00:11:12.299 "write_zeroes": true, 00:11:12.299 "zcopy": true, 00:11:12.299 "get_zone_info": false, 00:11:12.299 "zone_management": false, 00:11:12.299 "zone_append": false, 00:11:12.299 "compare": false, 00:11:12.299 "compare_and_write": false, 
00:11:12.299 "abort": true, 00:11:12.299 "seek_hole": false, 00:11:12.299 "seek_data": false, 00:11:12.299 "copy": true, 00:11:12.299 "nvme_iov_md": false 00:11:12.299 }, 00:11:12.299 "memory_domains": [ 00:11:12.299 { 00:11:12.299 "dma_device_id": "system", 00:11:12.299 "dma_device_type": 1 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.299 "dma_device_type": 2 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "driver_specific": {} 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 BaseBdev3 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.299 17:52:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 [ 00:11:12.299 { 00:11:12.299 "name": "BaseBdev3", 00:11:12.299 "aliases": [ 00:11:12.299 "203fb10b-fe01-4613-818b-4be951fd5db3" 00:11:12.299 ], 00:11:12.299 "product_name": "Malloc disk", 00:11:12.299 "block_size": 512, 00:11:12.299 "num_blocks": 65536, 00:11:12.299 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:12.299 "assigned_rate_limits": { 00:11:12.299 "rw_ios_per_sec": 0, 00:11:12.299 "rw_mbytes_per_sec": 0, 00:11:12.299 "r_mbytes_per_sec": 0, 00:11:12.299 "w_mbytes_per_sec": 0 00:11:12.299 }, 00:11:12.299 "claimed": false, 00:11:12.299 "zoned": false, 00:11:12.299 "supported_io_types": { 00:11:12.299 "read": true, 00:11:12.299 "write": true, 00:11:12.299 "unmap": true, 00:11:12.299 "flush": true, 00:11:12.299 "reset": true, 00:11:12.299 "nvme_admin": false, 00:11:12.299 "nvme_io": false, 00:11:12.299 "nvme_io_md": false, 00:11:12.299 "write_zeroes": true, 00:11:12.299 "zcopy": true, 00:11:12.299 "get_zone_info": false, 00:11:12.299 "zone_management": false, 00:11:12.299 "zone_append": false, 00:11:12.299 "compare": false, 00:11:12.299 "compare_and_write": false, 
00:11:12.299 "abort": true, 00:11:12.299 "seek_hole": false, 00:11:12.299 "seek_data": false, 00:11:12.299 "copy": true, 00:11:12.299 "nvme_iov_md": false 00:11:12.299 }, 00:11:12.299 "memory_domains": [ 00:11:12.299 { 00:11:12.299 "dma_device_id": "system", 00:11:12.299 "dma_device_type": 1 00:11:12.299 }, 00:11:12.299 { 00:11:12.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.299 "dma_device_type": 2 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "driver_specific": {} 00:11:12.299 } 00:11:12.299 ] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.299 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 BaseBdev4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.300 17:52:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 [ 00:11:12.300 { 00:11:12.300 "name": "BaseBdev4", 00:11:12.300 "aliases": [ 00:11:12.300 "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43" 00:11:12.300 ], 00:11:12.300 "product_name": "Malloc disk", 00:11:12.300 "block_size": 512, 00:11:12.300 "num_blocks": 65536, 00:11:12.300 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:12.300 "assigned_rate_limits": { 00:11:12.300 "rw_ios_per_sec": 0, 00:11:12.300 "rw_mbytes_per_sec": 0, 00:11:12.300 "r_mbytes_per_sec": 0, 00:11:12.300 "w_mbytes_per_sec": 0 00:11:12.300 }, 00:11:12.300 "claimed": false, 00:11:12.300 "zoned": false, 00:11:12.300 "supported_io_types": { 00:11:12.300 "read": true, 00:11:12.300 "write": true, 00:11:12.300 "unmap": true, 00:11:12.300 "flush": true, 00:11:12.300 "reset": true, 00:11:12.300 "nvme_admin": false, 00:11:12.300 "nvme_io": false, 00:11:12.300 "nvme_io_md": false, 00:11:12.300 "write_zeroes": true, 00:11:12.300 "zcopy": true, 00:11:12.300 "get_zone_info": false, 00:11:12.300 "zone_management": false, 00:11:12.300 "zone_append": false, 00:11:12.300 "compare": false, 00:11:12.300 "compare_and_write": false, 
00:11:12.300 "abort": true, 00:11:12.300 "seek_hole": false, 00:11:12.300 "seek_data": false, 00:11:12.300 "copy": true, 00:11:12.300 "nvme_iov_md": false 00:11:12.300 }, 00:11:12.300 "memory_domains": [ 00:11:12.300 { 00:11:12.300 "dma_device_id": "system", 00:11:12.300 "dma_device_type": 1 00:11:12.300 }, 00:11:12.300 { 00:11:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.300 "dma_device_type": 2 00:11:12.300 } 00:11:12.300 ], 00:11:12.300 "driver_specific": {} 00:11:12.300 } 00:11:12.300 ] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 [2024-10-25 17:52:30.684842] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.300 [2024-10-25 17:52:30.684884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.300 [2024-10-25 17:52:30.684904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.300 [2024-10-25 17:52:30.686773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.300 [2024-10-25 17:52:30.686824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.300 17:52:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.300 "name": "Existed_Raid", 00:11:12.300 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:12.300 "strip_size_kb": 0, 00:11:12.300 "state": "configuring", 00:11:12.300 "raid_level": "raid1", 00:11:12.300 "superblock": false, 00:11:12.300 "num_base_bdevs": 4, 00:11:12.300 "num_base_bdevs_discovered": 3, 00:11:12.300 "num_base_bdevs_operational": 4, 00:11:12.300 "base_bdevs_list": [ 00:11:12.300 { 00:11:12.300 "name": "BaseBdev1", 00:11:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.300 "is_configured": false, 00:11:12.300 "data_offset": 0, 00:11:12.300 "data_size": 0 00:11:12.300 }, 00:11:12.300 { 00:11:12.300 "name": "BaseBdev2", 00:11:12.300 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:12.300 "is_configured": true, 00:11:12.300 "data_offset": 0, 00:11:12.300 "data_size": 65536 00:11:12.300 }, 00:11:12.300 { 00:11:12.300 "name": "BaseBdev3", 00:11:12.300 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:12.300 "is_configured": true, 00:11:12.300 "data_offset": 0, 00:11:12.300 "data_size": 65536 00:11:12.300 }, 00:11:12.300 { 00:11:12.300 "name": "BaseBdev4", 00:11:12.300 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:12.300 "is_configured": true, 00:11:12.300 "data_offset": 0, 00:11:12.300 "data_size": 65536 00:11:12.300 } 00:11:12.300 ] 00:11:12.300 }' 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.300 17:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.868 [2024-10-25 17:52:31.136178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.868 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.869 "name": "Existed_Raid", 00:11:12.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.869 
"strip_size_kb": 0, 00:11:12.869 "state": "configuring", 00:11:12.869 "raid_level": "raid1", 00:11:12.869 "superblock": false, 00:11:12.869 "num_base_bdevs": 4, 00:11:12.869 "num_base_bdevs_discovered": 2, 00:11:12.869 "num_base_bdevs_operational": 4, 00:11:12.869 "base_bdevs_list": [ 00:11:12.869 { 00:11:12.869 "name": "BaseBdev1", 00:11:12.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.869 "is_configured": false, 00:11:12.869 "data_offset": 0, 00:11:12.869 "data_size": 0 00:11:12.869 }, 00:11:12.869 { 00:11:12.869 "name": null, 00:11:12.869 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:12.869 "is_configured": false, 00:11:12.869 "data_offset": 0, 00:11:12.869 "data_size": 65536 00:11:12.869 }, 00:11:12.869 { 00:11:12.869 "name": "BaseBdev3", 00:11:12.869 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:12.869 "is_configured": true, 00:11:12.869 "data_offset": 0, 00:11:12.869 "data_size": 65536 00:11:12.869 }, 00:11:12.869 { 00:11:12.869 "name": "BaseBdev4", 00:11:12.869 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:12.869 "is_configured": true, 00:11:12.869 "data_offset": 0, 00:11:12.869 "data_size": 65536 00:11:12.869 } 00:11:12.869 ] 00:11:12.869 }' 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.869 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.437 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.437 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.438 17:52:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.438 [2024-10-25 17:52:31.678808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.438 BaseBdev1 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.438 [ 00:11:13.438 { 00:11:13.438 "name": "BaseBdev1", 00:11:13.438 "aliases": [ 00:11:13.438 "59504626-ca9d-49cc-9ded-8fb31959f2ec" 00:11:13.438 ], 00:11:13.438 "product_name": "Malloc disk", 00:11:13.438 "block_size": 512, 00:11:13.438 "num_blocks": 65536, 00:11:13.438 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:13.438 "assigned_rate_limits": { 00:11:13.438 "rw_ios_per_sec": 0, 00:11:13.438 "rw_mbytes_per_sec": 0, 00:11:13.438 "r_mbytes_per_sec": 0, 00:11:13.438 "w_mbytes_per_sec": 0 00:11:13.438 }, 00:11:13.438 "claimed": true, 00:11:13.438 "claim_type": "exclusive_write", 00:11:13.438 "zoned": false, 00:11:13.438 "supported_io_types": { 00:11:13.438 "read": true, 00:11:13.438 "write": true, 00:11:13.438 "unmap": true, 00:11:13.438 "flush": true, 00:11:13.438 "reset": true, 00:11:13.438 "nvme_admin": false, 00:11:13.438 "nvme_io": false, 00:11:13.438 "nvme_io_md": false, 00:11:13.438 "write_zeroes": true, 00:11:13.438 "zcopy": true, 00:11:13.438 "get_zone_info": false, 00:11:13.438 "zone_management": false, 00:11:13.438 "zone_append": false, 00:11:13.438 "compare": false, 00:11:13.438 "compare_and_write": false, 00:11:13.438 "abort": true, 00:11:13.438 "seek_hole": false, 00:11:13.438 "seek_data": false, 00:11:13.438 "copy": true, 00:11:13.438 "nvme_iov_md": false 00:11:13.438 }, 00:11:13.438 "memory_domains": [ 00:11:13.438 { 00:11:13.438 "dma_device_id": "system", 00:11:13.438 "dma_device_type": 1 00:11:13.438 }, 00:11:13.438 { 00:11:13.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.438 "dma_device_type": 2 00:11:13.438 } 00:11:13.438 ], 00:11:13.438 "driver_specific": {} 00:11:13.438 } 00:11:13.438 ] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.438 "name": "Existed_Raid", 00:11:13.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.438 
"strip_size_kb": 0, 00:11:13.438 "state": "configuring", 00:11:13.438 "raid_level": "raid1", 00:11:13.438 "superblock": false, 00:11:13.438 "num_base_bdevs": 4, 00:11:13.438 "num_base_bdevs_discovered": 3, 00:11:13.438 "num_base_bdevs_operational": 4, 00:11:13.438 "base_bdevs_list": [ 00:11:13.438 { 00:11:13.438 "name": "BaseBdev1", 00:11:13.438 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:13.438 "is_configured": true, 00:11:13.438 "data_offset": 0, 00:11:13.438 "data_size": 65536 00:11:13.438 }, 00:11:13.438 { 00:11:13.438 "name": null, 00:11:13.438 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:13.438 "is_configured": false, 00:11:13.438 "data_offset": 0, 00:11:13.438 "data_size": 65536 00:11:13.438 }, 00:11:13.438 { 00:11:13.438 "name": "BaseBdev3", 00:11:13.438 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:13.438 "is_configured": true, 00:11:13.438 "data_offset": 0, 00:11:13.438 "data_size": 65536 00:11:13.438 }, 00:11:13.438 { 00:11:13.438 "name": "BaseBdev4", 00:11:13.438 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:13.438 "is_configured": true, 00:11:13.438 "data_offset": 0, 00:11:13.438 "data_size": 65536 00:11:13.438 } 00:11:13.438 ] 00:11:13.438 }' 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.438 17:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.008 
17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 [2024-10-25 17:52:32.241947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.008 "name": "Existed_Raid", 00:11:14.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.008 "strip_size_kb": 0, 00:11:14.008 "state": "configuring", 00:11:14.008 "raid_level": "raid1", 00:11:14.008 "superblock": false, 00:11:14.008 "num_base_bdevs": 4, 00:11:14.008 "num_base_bdevs_discovered": 2, 00:11:14.008 "num_base_bdevs_operational": 4, 00:11:14.008 "base_bdevs_list": [ 00:11:14.008 { 00:11:14.008 "name": "BaseBdev1", 00:11:14.008 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:14.008 "is_configured": true, 00:11:14.008 "data_offset": 0, 00:11:14.008 "data_size": 65536 00:11:14.008 }, 00:11:14.008 { 00:11:14.008 "name": null, 00:11:14.008 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:14.008 "is_configured": false, 00:11:14.008 "data_offset": 0, 00:11:14.008 "data_size": 65536 00:11:14.008 }, 00:11:14.008 { 00:11:14.008 "name": null, 00:11:14.008 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:14.008 "is_configured": false, 00:11:14.008 "data_offset": 0, 00:11:14.008 "data_size": 65536 00:11:14.008 }, 00:11:14.008 { 00:11:14.008 "name": "BaseBdev4", 00:11:14.008 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:14.008 "is_configured": true, 00:11:14.008 "data_offset": 0, 00:11:14.008 "data_size": 65536 00:11:14.008 } 00:11:14.008 ] 00:11:14.008 }' 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.008 17:52:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.268 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.268 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.268 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.268 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.528 [2024-10-25 17:52:32.741109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.528 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.528 "name": "Existed_Raid", 00:11:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.528 "strip_size_kb": 0, 00:11:14.528 "state": "configuring", 00:11:14.528 "raid_level": "raid1", 00:11:14.528 "superblock": false, 00:11:14.528 "num_base_bdevs": 4, 00:11:14.528 "num_base_bdevs_discovered": 3, 00:11:14.528 "num_base_bdevs_operational": 4, 00:11:14.528 "base_bdevs_list": [ 00:11:14.528 { 00:11:14.528 "name": "BaseBdev1", 00:11:14.528 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:14.528 "is_configured": true, 00:11:14.528 "data_offset": 0, 00:11:14.528 "data_size": 65536 00:11:14.528 }, 00:11:14.528 { 00:11:14.528 "name": null, 00:11:14.528 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:14.528 "is_configured": false, 00:11:14.528 "data_offset": 0, 00:11:14.528 "data_size": 65536 00:11:14.529 }, 00:11:14.529 { 
00:11:14.529 "name": "BaseBdev3", 00:11:14.529 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:14.529 "is_configured": true, 00:11:14.529 "data_offset": 0, 00:11:14.529 "data_size": 65536 00:11:14.529 }, 00:11:14.529 { 00:11:14.529 "name": "BaseBdev4", 00:11:14.529 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:14.529 "is_configured": true, 00:11:14.529 "data_offset": 0, 00:11:14.529 "data_size": 65536 00:11:14.529 } 00:11:14.529 ] 00:11:14.529 }' 00:11:14.529 17:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.529 17:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.788 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.788 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.788 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.788 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.048 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.048 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:15.048 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.048 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.049 [2024-10-25 17:52:33.260314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.049 "name": "Existed_Raid", 00:11:15.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.049 "strip_size_kb": 0, 00:11:15.049 "state": "configuring", 00:11:15.049 "raid_level": "raid1", 00:11:15.049 "superblock": false, 00:11:15.049 
"num_base_bdevs": 4, 00:11:15.049 "num_base_bdevs_discovered": 2, 00:11:15.049 "num_base_bdevs_operational": 4, 00:11:15.049 "base_bdevs_list": [ 00:11:15.049 { 00:11:15.049 "name": null, 00:11:15.049 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:15.049 "is_configured": false, 00:11:15.049 "data_offset": 0, 00:11:15.049 "data_size": 65536 00:11:15.049 }, 00:11:15.049 { 00:11:15.049 "name": null, 00:11:15.049 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:15.049 "is_configured": false, 00:11:15.049 "data_offset": 0, 00:11:15.049 "data_size": 65536 00:11:15.049 }, 00:11:15.049 { 00:11:15.049 "name": "BaseBdev3", 00:11:15.049 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:15.049 "is_configured": true, 00:11:15.049 "data_offset": 0, 00:11:15.049 "data_size": 65536 00:11:15.049 }, 00:11:15.049 { 00:11:15.049 "name": "BaseBdev4", 00:11:15.049 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:15.049 "is_configured": true, 00:11:15.049 "data_offset": 0, 00:11:15.049 "data_size": 65536 00:11:15.049 } 00:11:15.049 ] 00:11:15.049 }' 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.049 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:15.619 17:52:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 [2024-10-25 17:52:33.865474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.619 17:52:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.619 "name": "Existed_Raid", 00:11:15.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.619 "strip_size_kb": 0, 00:11:15.619 "state": "configuring", 00:11:15.619 "raid_level": "raid1", 00:11:15.619 "superblock": false, 00:11:15.619 "num_base_bdevs": 4, 00:11:15.619 "num_base_bdevs_discovered": 3, 00:11:15.619 "num_base_bdevs_operational": 4, 00:11:15.619 "base_bdevs_list": [ 00:11:15.619 { 00:11:15.619 "name": null, 00:11:15.619 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:15.619 "is_configured": false, 00:11:15.619 "data_offset": 0, 00:11:15.619 "data_size": 65536 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "BaseBdev2", 00:11:15.619 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 0, 00:11:15.619 "data_size": 65536 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "BaseBdev3", 00:11:15.619 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 0, 00:11:15.619 "data_size": 65536 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "BaseBdev4", 00:11:15.619 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 0, 00:11:15.619 "data_size": 65536 00:11:15.619 } 00:11:15.619 ] 00:11:15.619 }' 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.619 17:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.878 17:52:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.878 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.878 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.878 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.138 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.138 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:16.138 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59504626-ca9d-49cc-9ded-8fb31959f2ec 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.139 [2024-10-25 17:52:34.429079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:16.139 [2024-10-25 17:52:34.429140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:16.139 [2024-10-25 17:52:34.429153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:16.139 [2024-10-25 17:52:34.429504] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:16.139 [2024-10-25 17:52:34.429713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:16.139 [2024-10-25 17:52:34.429731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:16.139 [2024-10-25 17:52:34.430045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.139 NewBaseBdev 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.139 17:52:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.139 [ 00:11:16.139 { 00:11:16.139 "name": "NewBaseBdev", 00:11:16.139 "aliases": [ 00:11:16.139 "59504626-ca9d-49cc-9ded-8fb31959f2ec" 00:11:16.139 ], 00:11:16.139 "product_name": "Malloc disk", 00:11:16.139 "block_size": 512, 00:11:16.139 "num_blocks": 65536, 00:11:16.139 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:16.139 "assigned_rate_limits": { 00:11:16.139 "rw_ios_per_sec": 0, 00:11:16.139 "rw_mbytes_per_sec": 0, 00:11:16.139 "r_mbytes_per_sec": 0, 00:11:16.139 "w_mbytes_per_sec": 0 00:11:16.139 }, 00:11:16.139 "claimed": true, 00:11:16.139 "claim_type": "exclusive_write", 00:11:16.139 "zoned": false, 00:11:16.139 "supported_io_types": { 00:11:16.139 "read": true, 00:11:16.139 "write": true, 00:11:16.139 "unmap": true, 00:11:16.139 "flush": true, 00:11:16.139 "reset": true, 00:11:16.139 "nvme_admin": false, 00:11:16.139 "nvme_io": false, 00:11:16.139 "nvme_io_md": false, 00:11:16.139 "write_zeroes": true, 00:11:16.139 "zcopy": true, 00:11:16.139 "get_zone_info": false, 00:11:16.139 "zone_management": false, 00:11:16.139 "zone_append": false, 00:11:16.139 "compare": false, 00:11:16.139 "compare_and_write": false, 00:11:16.139 "abort": true, 00:11:16.139 "seek_hole": false, 00:11:16.139 "seek_data": false, 00:11:16.139 "copy": true, 00:11:16.139 "nvme_iov_md": false 00:11:16.139 }, 00:11:16.139 "memory_domains": [ 00:11:16.139 { 00:11:16.139 "dma_device_id": "system", 00:11:16.139 "dma_device_type": 1 00:11:16.139 }, 00:11:16.139 { 00:11:16.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.139 "dma_device_type": 2 00:11:16.139 } 00:11:16.139 ], 00:11:16.139 "driver_specific": {} 00:11:16.139 } 00:11:16.139 ] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:16.139 17:52:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.139 "name": "Existed_Raid", 00:11:16.139 "uuid": "ffc49e7c-27bd-4769-82be-2de0f1d01834", 00:11:16.139 "strip_size_kb": 0, 00:11:16.139 "state": "online", 00:11:16.139 "raid_level": "raid1", 
00:11:16.139 "superblock": false, 00:11:16.139 "num_base_bdevs": 4, 00:11:16.139 "num_base_bdevs_discovered": 4, 00:11:16.139 "num_base_bdevs_operational": 4, 00:11:16.139 "base_bdevs_list": [ 00:11:16.139 { 00:11:16.139 "name": "NewBaseBdev", 00:11:16.139 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:16.139 "is_configured": true, 00:11:16.139 "data_offset": 0, 00:11:16.139 "data_size": 65536 00:11:16.139 }, 00:11:16.139 { 00:11:16.139 "name": "BaseBdev2", 00:11:16.139 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:16.139 "is_configured": true, 00:11:16.139 "data_offset": 0, 00:11:16.139 "data_size": 65536 00:11:16.139 }, 00:11:16.139 { 00:11:16.139 "name": "BaseBdev3", 00:11:16.139 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:16.139 "is_configured": true, 00:11:16.139 "data_offset": 0, 00:11:16.139 "data_size": 65536 00:11:16.139 }, 00:11:16.139 { 00:11:16.139 "name": "BaseBdev4", 00:11:16.139 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:16.139 "is_configured": true, 00:11:16.139 "data_offset": 0, 00:11:16.139 "data_size": 65536 00:11:16.139 } 00:11:16.139 ] 00:11:16.139 }' 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.139 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.708 [2024-10-25 17:52:34.924753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.708 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.708 "name": "Existed_Raid", 00:11:16.708 "aliases": [ 00:11:16.708 "ffc49e7c-27bd-4769-82be-2de0f1d01834" 00:11:16.708 ], 00:11:16.708 "product_name": "Raid Volume", 00:11:16.708 "block_size": 512, 00:11:16.708 "num_blocks": 65536, 00:11:16.708 "uuid": "ffc49e7c-27bd-4769-82be-2de0f1d01834", 00:11:16.708 "assigned_rate_limits": { 00:11:16.708 "rw_ios_per_sec": 0, 00:11:16.708 "rw_mbytes_per_sec": 0, 00:11:16.708 "r_mbytes_per_sec": 0, 00:11:16.708 "w_mbytes_per_sec": 0 00:11:16.708 }, 00:11:16.708 "claimed": false, 00:11:16.708 "zoned": false, 00:11:16.708 "supported_io_types": { 00:11:16.708 "read": true, 00:11:16.708 "write": true, 00:11:16.708 "unmap": false, 00:11:16.708 "flush": false, 00:11:16.708 "reset": true, 00:11:16.708 "nvme_admin": false, 00:11:16.708 "nvme_io": false, 00:11:16.708 "nvme_io_md": false, 00:11:16.708 "write_zeroes": true, 00:11:16.708 "zcopy": false, 00:11:16.708 "get_zone_info": false, 00:11:16.708 "zone_management": false, 00:11:16.708 "zone_append": false, 00:11:16.708 "compare": false, 00:11:16.708 "compare_and_write": false, 00:11:16.708 "abort": false, 00:11:16.708 "seek_hole": false, 00:11:16.708 "seek_data": false, 00:11:16.708 "copy": false, 00:11:16.708 
"nvme_iov_md": false 00:11:16.708 }, 00:11:16.708 "memory_domains": [ 00:11:16.708 { 00:11:16.708 "dma_device_id": "system", 00:11:16.708 "dma_device_type": 1 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.708 "dma_device_type": 2 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "system", 00:11:16.708 "dma_device_type": 1 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.708 "dma_device_type": 2 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "system", 00:11:16.708 "dma_device_type": 1 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.708 "dma_device_type": 2 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "system", 00:11:16.708 "dma_device_type": 1 00:11:16.708 }, 00:11:16.708 { 00:11:16.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.709 "dma_device_type": 2 00:11:16.709 } 00:11:16.709 ], 00:11:16.709 "driver_specific": { 00:11:16.709 "raid": { 00:11:16.709 "uuid": "ffc49e7c-27bd-4769-82be-2de0f1d01834", 00:11:16.709 "strip_size_kb": 0, 00:11:16.709 "state": "online", 00:11:16.709 "raid_level": "raid1", 00:11:16.709 "superblock": false, 00:11:16.709 "num_base_bdevs": 4, 00:11:16.709 "num_base_bdevs_discovered": 4, 00:11:16.709 "num_base_bdevs_operational": 4, 00:11:16.709 "base_bdevs_list": [ 00:11:16.709 { 00:11:16.709 "name": "NewBaseBdev", 00:11:16.709 "uuid": "59504626-ca9d-49cc-9ded-8fb31959f2ec", 00:11:16.709 "is_configured": true, 00:11:16.709 "data_offset": 0, 00:11:16.709 "data_size": 65536 00:11:16.709 }, 00:11:16.709 { 00:11:16.709 "name": "BaseBdev2", 00:11:16.709 "uuid": "d0eed0e1-3935-43f6-bd81-af189cafc942", 00:11:16.709 "is_configured": true, 00:11:16.709 "data_offset": 0, 00:11:16.709 "data_size": 65536 00:11:16.709 }, 00:11:16.709 { 00:11:16.709 "name": "BaseBdev3", 00:11:16.709 "uuid": "203fb10b-fe01-4613-818b-4be951fd5db3", 00:11:16.709 "is_configured": true, 
00:11:16.709 "data_offset": 0, 00:11:16.709 "data_size": 65536 00:11:16.709 }, 00:11:16.709 { 00:11:16.709 "name": "BaseBdev4", 00:11:16.709 "uuid": "7aecf9bc-da70-4c26-8aef-a2b5ac5e0e43", 00:11:16.709 "is_configured": true, 00:11:16.709 "data_offset": 0, 00:11:16.709 "data_size": 65536 00:11:16.709 } 00:11:16.709 ] 00:11:16.709 } 00:11:16.709 } 00:11:16.709 }' 00:11:16.709 17:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:16.709 BaseBdev2 00:11:16.709 BaseBdev3 00:11:16.709 BaseBdev4' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.709 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.002 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.003 [2024-10-25 17:52:35.239839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.003 [2024-10-25 17:52:35.239871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.003 [2024-10-25 17:52:35.239965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.003 [2024-10-25 17:52:35.240324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.003 [2024-10-25 17:52:35.240353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72912 
00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72912 ']' 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72912 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72912 00:11:17.003 killing process with pid 72912 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72912' 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72912 00:11:17.003 [2024-10-25 17:52:35.288535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.003 17:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72912 00:11:17.285 [2024-10-25 17:52:35.687296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:18.666 00:11:18.666 real 0m11.841s 00:11:18.666 user 0m18.811s 00:11:18.666 sys 0m2.155s 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.666 ************************************ 00:11:18.666 END TEST raid_state_function_test 00:11:18.666 ************************************ 00:11:18.666 17:52:36 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:18.666 17:52:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:18.666 17:52:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.666 17:52:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.666 ************************************ 00:11:18.666 START TEST raid_state_function_test_sb 00:11:18.666 ************************************ 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.666 17:52:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:18.666 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73583 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:18.667 Process raid pid: 73583 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73583' 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73583 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73583 ']' 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.667 17:52:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.667 [2024-10-25 17:52:36.993631] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:11:18.667 [2024-10-25 17:52:36.993758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.926 [2024-10-25 17:52:37.174099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.926 [2024-10-25 17:52:37.279999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.186 [2024-10-25 17:52:37.477318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.186 [2024-10-25 17:52:37.477363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.446 [2024-10-25 17:52:37.812341] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.446 [2024-10-25 17:52:37.812402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.446 [2024-10-25 17:52:37.812418] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.446 [2024-10-25 17:52:37.812430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.446 [2024-10-25 17:52:37.812437] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:19.446 [2024-10-25 17:52:37.812447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.446 [2024-10-25 17:52:37.812462] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.446 [2024-10-25 17:52:37.812476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.446 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.447 17:52:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.447 "name": "Existed_Raid", 00:11:19.447 "uuid": "4c9fb0d4-3558-4600-9c71-83b49208f126", 00:11:19.447 "strip_size_kb": 0, 00:11:19.447 "state": "configuring", 00:11:19.447 "raid_level": "raid1", 00:11:19.447 "superblock": true, 00:11:19.447 "num_base_bdevs": 4, 00:11:19.447 "num_base_bdevs_discovered": 0, 00:11:19.447 "num_base_bdevs_operational": 4, 00:11:19.447 "base_bdevs_list": [ 00:11:19.447 { 00:11:19.447 "name": "BaseBdev1", 00:11:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.447 "is_configured": false, 00:11:19.447 "data_offset": 0, 00:11:19.447 "data_size": 0 00:11:19.447 }, 00:11:19.447 { 00:11:19.447 "name": "BaseBdev2", 00:11:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.447 "is_configured": false, 00:11:19.447 "data_offset": 0, 00:11:19.447 "data_size": 0 00:11:19.447 }, 00:11:19.447 { 00:11:19.447 "name": "BaseBdev3", 00:11:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.447 "is_configured": false, 00:11:19.447 "data_offset": 0, 00:11:19.447 "data_size": 0 00:11:19.447 }, 00:11:19.447 { 00:11:19.447 "name": "BaseBdev4", 00:11:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.447 "is_configured": false, 00:11:19.447 "data_offset": 0, 00:11:19.447 "data_size": 0 00:11:19.447 } 00:11:19.447 ] 00:11:19.447 }' 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.447 17:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 17:52:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 [2024-10-25 17:52:38.279606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.018 [2024-10-25 17:52:38.279653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 [2024-10-25 17:52:38.291569] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.018 [2024-10-25 17:52:38.291614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.018 [2024-10-25 17:52:38.291623] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.018 [2024-10-25 17:52:38.291632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.018 [2024-10-25 17:52:38.291638] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.018 [2024-10-25 17:52:38.291648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.018 [2024-10-25 17:52:38.291654] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:20.018 [2024-10-25 17:52:38.291662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 [2024-10-25 17:52:38.339962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.018 BaseBdev1 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 [ 00:11:20.018 { 00:11:20.018 "name": "BaseBdev1", 00:11:20.018 "aliases": [ 00:11:20.018 "d25cd10d-c695-4d32-bbf9-acfabcd40625" 00:11:20.018 ], 00:11:20.018 "product_name": "Malloc disk", 00:11:20.018 "block_size": 512, 00:11:20.018 "num_blocks": 65536, 00:11:20.018 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:20.018 "assigned_rate_limits": { 00:11:20.018 "rw_ios_per_sec": 0, 00:11:20.018 "rw_mbytes_per_sec": 0, 00:11:20.018 "r_mbytes_per_sec": 0, 00:11:20.018 "w_mbytes_per_sec": 0 00:11:20.018 }, 00:11:20.018 "claimed": true, 00:11:20.018 "claim_type": "exclusive_write", 00:11:20.018 "zoned": false, 00:11:20.018 "supported_io_types": { 00:11:20.018 "read": true, 00:11:20.018 "write": true, 00:11:20.018 "unmap": true, 00:11:20.018 "flush": true, 00:11:20.018 "reset": true, 00:11:20.018 "nvme_admin": false, 00:11:20.018 "nvme_io": false, 00:11:20.018 "nvme_io_md": false, 00:11:20.018 "write_zeroes": true, 00:11:20.018 "zcopy": true, 00:11:20.018 "get_zone_info": false, 00:11:20.018 "zone_management": false, 00:11:20.018 "zone_append": false, 00:11:20.018 "compare": false, 00:11:20.018 "compare_and_write": false, 00:11:20.018 "abort": true, 00:11:20.018 "seek_hole": false, 00:11:20.018 "seek_data": false, 00:11:20.018 "copy": true, 00:11:20.018 "nvme_iov_md": false 00:11:20.018 }, 00:11:20.018 "memory_domains": [ 00:11:20.018 { 00:11:20.018 "dma_device_id": "system", 00:11:20.018 "dma_device_type": 1 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.018 "dma_device_type": 2 00:11:20.018 } 00:11:20.018 
], 00:11:20.018 "driver_specific": {} 00:11:20.018 } 00:11:20.018 ] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 17:52:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.018 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.018 "name": "Existed_Raid", 00:11:20.018 "uuid": "b2e2c128-8fa8-4ca8-8d68-997d7fbc7f97", 00:11:20.018 "strip_size_kb": 0, 00:11:20.018 "state": "configuring", 00:11:20.018 "raid_level": "raid1", 00:11:20.018 "superblock": true, 00:11:20.018 "num_base_bdevs": 4, 00:11:20.018 "num_base_bdevs_discovered": 1, 00:11:20.018 "num_base_bdevs_operational": 4, 00:11:20.018 "base_bdevs_list": [ 00:11:20.018 { 00:11:20.018 "name": "BaseBdev1", 00:11:20.018 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:20.018 "is_configured": true, 00:11:20.018 "data_offset": 2048, 00:11:20.018 "data_size": 63488 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "name": "BaseBdev2", 00:11:20.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.018 "is_configured": false, 00:11:20.019 "data_offset": 0, 00:11:20.019 "data_size": 0 00:11:20.019 }, 00:11:20.019 { 00:11:20.019 "name": "BaseBdev3", 00:11:20.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.019 "is_configured": false, 00:11:20.019 "data_offset": 0, 00:11:20.019 "data_size": 0 00:11:20.019 }, 00:11:20.019 { 00:11:20.019 "name": "BaseBdev4", 00:11:20.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.019 "is_configured": false, 00:11:20.019 "data_offset": 0, 00:11:20.019 "data_size": 0 00:11:20.019 } 00:11:20.019 ] 00:11:20.019 }' 00:11:20.019 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.019 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.588 17:52:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 [2024-10-25 17:52:38.839152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.588 [2024-10-25 17:52:38.839218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 [2024-10-25 17:52:38.851185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.588 [2024-10-25 17:52:38.853297] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.588 [2024-10-25 17:52:38.853380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.588 [2024-10-25 17:52:38.853411] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.588 [2024-10-25 17:52:38.853436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.588 [2024-10-25 17:52:38.853456] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.588 [2024-10-25 17:52:38.853477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.588 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:20.588 "name": "Existed_Raid", 00:11:20.588 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:20.588 "strip_size_kb": 0, 00:11:20.589 "state": "configuring", 00:11:20.589 "raid_level": "raid1", 00:11:20.589 "superblock": true, 00:11:20.589 "num_base_bdevs": 4, 00:11:20.589 "num_base_bdevs_discovered": 1, 00:11:20.589 "num_base_bdevs_operational": 4, 00:11:20.589 "base_bdevs_list": [ 00:11:20.589 { 00:11:20.589 "name": "BaseBdev1", 00:11:20.589 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:20.589 "is_configured": true, 00:11:20.589 "data_offset": 2048, 00:11:20.589 "data_size": 63488 00:11:20.589 }, 00:11:20.589 { 00:11:20.589 "name": "BaseBdev2", 00:11:20.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.589 "is_configured": false, 00:11:20.589 "data_offset": 0, 00:11:20.589 "data_size": 0 00:11:20.589 }, 00:11:20.589 { 00:11:20.589 "name": "BaseBdev3", 00:11:20.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.589 "is_configured": false, 00:11:20.589 "data_offset": 0, 00:11:20.589 "data_size": 0 00:11:20.589 }, 00:11:20.589 { 00:11:20.589 "name": "BaseBdev4", 00:11:20.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.589 "is_configured": false, 00:11:20.589 "data_offset": 0, 00:11:20.589 "data_size": 0 00:11:20.589 } 00:11:20.589 ] 00:11:20.589 }' 00:11:20.589 17:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.589 17:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 [2024-10-25 17:52:39.333082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:21.158 BaseBdev2 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 [ 00:11:21.158 { 00:11:21.158 "name": "BaseBdev2", 00:11:21.158 "aliases": [ 00:11:21.158 "985f4170-f29c-44bd-975d-c89ab6a2405d" 00:11:21.158 ], 00:11:21.158 "product_name": "Malloc disk", 00:11:21.158 "block_size": 512, 00:11:21.158 "num_blocks": 65536, 00:11:21.158 "uuid": "985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:21.158 
"assigned_rate_limits": { 00:11:21.158 "rw_ios_per_sec": 0, 00:11:21.158 "rw_mbytes_per_sec": 0, 00:11:21.158 "r_mbytes_per_sec": 0, 00:11:21.158 "w_mbytes_per_sec": 0 00:11:21.158 }, 00:11:21.158 "claimed": true, 00:11:21.158 "claim_type": "exclusive_write", 00:11:21.158 "zoned": false, 00:11:21.158 "supported_io_types": { 00:11:21.158 "read": true, 00:11:21.158 "write": true, 00:11:21.158 "unmap": true, 00:11:21.158 "flush": true, 00:11:21.158 "reset": true, 00:11:21.158 "nvme_admin": false, 00:11:21.158 "nvme_io": false, 00:11:21.158 "nvme_io_md": false, 00:11:21.158 "write_zeroes": true, 00:11:21.158 "zcopy": true, 00:11:21.158 "get_zone_info": false, 00:11:21.158 "zone_management": false, 00:11:21.158 "zone_append": false, 00:11:21.158 "compare": false, 00:11:21.158 "compare_and_write": false, 00:11:21.158 "abort": true, 00:11:21.158 "seek_hole": false, 00:11:21.158 "seek_data": false, 00:11:21.158 "copy": true, 00:11:21.158 "nvme_iov_md": false 00:11:21.158 }, 00:11:21.158 "memory_domains": [ 00:11:21.158 { 00:11:21.158 "dma_device_id": "system", 00:11:21.158 "dma_device_type": 1 00:11:21.158 }, 00:11:21.158 { 00:11:21.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.158 "dma_device_type": 2 00:11:21.158 } 00:11:21.158 ], 00:11:21.158 "driver_specific": {} 00:11:21.158 } 00:11:21.158 ] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.158 "name": "Existed_Raid", 00:11:21.158 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:21.158 "strip_size_kb": 0, 00:11:21.158 "state": "configuring", 00:11:21.158 "raid_level": "raid1", 00:11:21.158 "superblock": true, 00:11:21.158 "num_base_bdevs": 4, 00:11:21.158 "num_base_bdevs_discovered": 2, 00:11:21.158 "num_base_bdevs_operational": 4, 
00:11:21.158 "base_bdevs_list": [ 00:11:21.158 { 00:11:21.158 "name": "BaseBdev1", 00:11:21.158 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:21.158 "is_configured": true, 00:11:21.158 "data_offset": 2048, 00:11:21.158 "data_size": 63488 00:11:21.158 }, 00:11:21.158 { 00:11:21.158 "name": "BaseBdev2", 00:11:21.158 "uuid": "985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:21.158 "is_configured": true, 00:11:21.158 "data_offset": 2048, 00:11:21.158 "data_size": 63488 00:11:21.158 }, 00:11:21.158 { 00:11:21.158 "name": "BaseBdev3", 00:11:21.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.158 "is_configured": false, 00:11:21.158 "data_offset": 0, 00:11:21.158 "data_size": 0 00:11:21.158 }, 00:11:21.158 { 00:11:21.158 "name": "BaseBdev4", 00:11:21.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.158 "is_configured": false, 00:11:21.158 "data_offset": 0, 00:11:21.158 "data_size": 0 00:11:21.158 } 00:11:21.158 ] 00:11:21.158 }' 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.158 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.730 [2024-10-25 17:52:39.912163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.730 BaseBdev3 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.730 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.731 [ 00:11:21.731 { 00:11:21.731 "name": "BaseBdev3", 00:11:21.731 "aliases": [ 00:11:21.731 "ad417e2a-1e97-4f08-a4a5-596cfd5ea253" 00:11:21.731 ], 00:11:21.731 "product_name": "Malloc disk", 00:11:21.731 "block_size": 512, 00:11:21.731 "num_blocks": 65536, 00:11:21.731 "uuid": "ad417e2a-1e97-4f08-a4a5-596cfd5ea253", 00:11:21.731 "assigned_rate_limits": { 00:11:21.731 "rw_ios_per_sec": 0, 00:11:21.731 "rw_mbytes_per_sec": 0, 00:11:21.731 "r_mbytes_per_sec": 0, 00:11:21.731 "w_mbytes_per_sec": 0 00:11:21.731 }, 00:11:21.731 "claimed": true, 00:11:21.731 "claim_type": "exclusive_write", 00:11:21.731 "zoned": false, 00:11:21.731 "supported_io_types": { 00:11:21.731 "read": true, 00:11:21.731 
"write": true, 00:11:21.731 "unmap": true, 00:11:21.731 "flush": true, 00:11:21.731 "reset": true, 00:11:21.731 "nvme_admin": false, 00:11:21.731 "nvme_io": false, 00:11:21.731 "nvme_io_md": false, 00:11:21.731 "write_zeroes": true, 00:11:21.731 "zcopy": true, 00:11:21.731 "get_zone_info": false, 00:11:21.731 "zone_management": false, 00:11:21.731 "zone_append": false, 00:11:21.731 "compare": false, 00:11:21.731 "compare_and_write": false, 00:11:21.731 "abort": true, 00:11:21.731 "seek_hole": false, 00:11:21.731 "seek_data": false, 00:11:21.731 "copy": true, 00:11:21.731 "nvme_iov_md": false 00:11:21.731 }, 00:11:21.731 "memory_domains": [ 00:11:21.731 { 00:11:21.731 "dma_device_id": "system", 00:11:21.731 "dma_device_type": 1 00:11:21.731 }, 00:11:21.731 { 00:11:21.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.731 "dma_device_type": 2 00:11:21.731 } 00:11:21.731 ], 00:11:21.731 "driver_specific": {} 00:11:21.731 } 00:11:21.731 ] 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.731 17:52:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.731 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.731 "name": "Existed_Raid", 00:11:21.731 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:21.731 "strip_size_kb": 0, 00:11:21.731 "state": "configuring", 00:11:21.731 "raid_level": "raid1", 00:11:21.731 "superblock": true, 00:11:21.731 "num_base_bdevs": 4, 00:11:21.731 "num_base_bdevs_discovered": 3, 00:11:21.731 "num_base_bdevs_operational": 4, 00:11:21.731 "base_bdevs_list": [ 00:11:21.731 { 00:11:21.731 "name": "BaseBdev1", 00:11:21.731 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:21.731 "is_configured": true, 00:11:21.731 "data_offset": 2048, 00:11:21.731 "data_size": 63488 00:11:21.731 }, 00:11:21.731 { 00:11:21.731 "name": "BaseBdev2", 00:11:21.731 "uuid": 
"985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:21.731 "is_configured": true, 00:11:21.731 "data_offset": 2048, 00:11:21.731 "data_size": 63488 00:11:21.731 }, 00:11:21.731 { 00:11:21.731 "name": "BaseBdev3", 00:11:21.731 "uuid": "ad417e2a-1e97-4f08-a4a5-596cfd5ea253", 00:11:21.731 "is_configured": true, 00:11:21.731 "data_offset": 2048, 00:11:21.731 "data_size": 63488 00:11:21.731 }, 00:11:21.731 { 00:11:21.731 "name": "BaseBdev4", 00:11:21.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.731 "is_configured": false, 00:11:21.731 "data_offset": 0, 00:11:21.731 "data_size": 0 00:11:21.731 } 00:11:21.731 ] 00:11:21.731 }' 00:11:21.731 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.731 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.992 [2024-10-25 17:52:40.411438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.992 [2024-10-25 17:52:40.411735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.992 [2024-10-25 17:52:40.411750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.992 [2024-10-25 17:52:40.412068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.992 BaseBdev4 00:11:21.992 [2024-10-25 17:52:40.412262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.992 [2024-10-25 17:52:40.412286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:21.992 [2024-10-25 17:52:40.412453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.992 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 [ 00:11:22.255 { 00:11:22.255 "name": "BaseBdev4", 00:11:22.255 "aliases": [ 00:11:22.255 "886d1190-a508-4bd6-9c64-e25efb30a6ec" 00:11:22.255 ], 00:11:22.255 "product_name": "Malloc disk", 00:11:22.255 "block_size": 512, 00:11:22.255 
"num_blocks": 65536, 00:11:22.255 "uuid": "886d1190-a508-4bd6-9c64-e25efb30a6ec", 00:11:22.255 "assigned_rate_limits": { 00:11:22.255 "rw_ios_per_sec": 0, 00:11:22.255 "rw_mbytes_per_sec": 0, 00:11:22.255 "r_mbytes_per_sec": 0, 00:11:22.255 "w_mbytes_per_sec": 0 00:11:22.255 }, 00:11:22.255 "claimed": true, 00:11:22.255 "claim_type": "exclusive_write", 00:11:22.255 "zoned": false, 00:11:22.255 "supported_io_types": { 00:11:22.255 "read": true, 00:11:22.255 "write": true, 00:11:22.255 "unmap": true, 00:11:22.255 "flush": true, 00:11:22.255 "reset": true, 00:11:22.255 "nvme_admin": false, 00:11:22.255 "nvme_io": false, 00:11:22.255 "nvme_io_md": false, 00:11:22.255 "write_zeroes": true, 00:11:22.255 "zcopy": true, 00:11:22.255 "get_zone_info": false, 00:11:22.255 "zone_management": false, 00:11:22.255 "zone_append": false, 00:11:22.255 "compare": false, 00:11:22.255 "compare_and_write": false, 00:11:22.255 "abort": true, 00:11:22.255 "seek_hole": false, 00:11:22.255 "seek_data": false, 00:11:22.255 "copy": true, 00:11:22.255 "nvme_iov_md": false 00:11:22.255 }, 00:11:22.255 "memory_domains": [ 00:11:22.255 { 00:11:22.255 "dma_device_id": "system", 00:11:22.255 "dma_device_type": 1 00:11:22.255 }, 00:11:22.255 { 00:11:22.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.255 "dma_device_type": 2 00:11:22.255 } 00:11:22.255 ], 00:11:22.255 "driver_specific": {} 00:11:22.255 } 00:11:22.255 ] 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.255 "name": "Existed_Raid", 00:11:22.255 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:22.255 "strip_size_kb": 0, 00:11:22.255 "state": "online", 00:11:22.255 "raid_level": "raid1", 00:11:22.255 "superblock": true, 00:11:22.255 "num_base_bdevs": 4, 
00:11:22.255 "num_base_bdevs_discovered": 4, 00:11:22.255 "num_base_bdevs_operational": 4, 00:11:22.255 "base_bdevs_list": [ 00:11:22.255 { 00:11:22.255 "name": "BaseBdev1", 00:11:22.255 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:22.255 "is_configured": true, 00:11:22.255 "data_offset": 2048, 00:11:22.255 "data_size": 63488 00:11:22.255 }, 00:11:22.255 { 00:11:22.255 "name": "BaseBdev2", 00:11:22.255 "uuid": "985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:22.255 "is_configured": true, 00:11:22.255 "data_offset": 2048, 00:11:22.255 "data_size": 63488 00:11:22.255 }, 00:11:22.255 { 00:11:22.255 "name": "BaseBdev3", 00:11:22.255 "uuid": "ad417e2a-1e97-4f08-a4a5-596cfd5ea253", 00:11:22.255 "is_configured": true, 00:11:22.255 "data_offset": 2048, 00:11:22.255 "data_size": 63488 00:11:22.255 }, 00:11:22.255 { 00:11:22.255 "name": "BaseBdev4", 00:11:22.255 "uuid": "886d1190-a508-4bd6-9c64-e25efb30a6ec", 00:11:22.255 "is_configured": true, 00:11:22.255 "data_offset": 2048, 00:11:22.255 "data_size": 63488 00:11:22.255 } 00:11:22.255 ] 00:11:22.255 }' 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.255 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.515 
17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.515 [2024-10-25 17:52:40.887077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.515 "name": "Existed_Raid", 00:11:22.515 "aliases": [ 00:11:22.515 "bc6f31d0-a04d-4304-bdac-8209c404183c" 00:11:22.515 ], 00:11:22.515 "product_name": "Raid Volume", 00:11:22.515 "block_size": 512, 00:11:22.515 "num_blocks": 63488, 00:11:22.515 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:22.515 "assigned_rate_limits": { 00:11:22.515 "rw_ios_per_sec": 0, 00:11:22.515 "rw_mbytes_per_sec": 0, 00:11:22.515 "r_mbytes_per_sec": 0, 00:11:22.515 "w_mbytes_per_sec": 0 00:11:22.515 }, 00:11:22.515 "claimed": false, 00:11:22.515 "zoned": false, 00:11:22.515 "supported_io_types": { 00:11:22.515 "read": true, 00:11:22.515 "write": true, 00:11:22.515 "unmap": false, 00:11:22.515 "flush": false, 00:11:22.515 "reset": true, 00:11:22.515 "nvme_admin": false, 00:11:22.515 "nvme_io": false, 00:11:22.515 "nvme_io_md": false, 00:11:22.515 "write_zeroes": true, 00:11:22.515 "zcopy": false, 00:11:22.515 "get_zone_info": false, 00:11:22.515 "zone_management": false, 00:11:22.515 "zone_append": false, 00:11:22.515 "compare": false, 00:11:22.515 "compare_and_write": false, 00:11:22.515 "abort": false, 00:11:22.515 "seek_hole": false, 00:11:22.515 "seek_data": false, 00:11:22.515 "copy": false, 00:11:22.515 
"nvme_iov_md": false 00:11:22.515 }, 00:11:22.515 "memory_domains": [ 00:11:22.515 { 00:11:22.515 "dma_device_id": "system", 00:11:22.515 "dma_device_type": 1 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.515 "dma_device_type": 2 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "system", 00:11:22.515 "dma_device_type": 1 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.515 "dma_device_type": 2 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "system", 00:11:22.515 "dma_device_type": 1 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.515 "dma_device_type": 2 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "system", 00:11:22.515 "dma_device_type": 1 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.515 "dma_device_type": 2 00:11:22.515 } 00:11:22.515 ], 00:11:22.515 "driver_specific": { 00:11:22.515 "raid": { 00:11:22.515 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:22.515 "strip_size_kb": 0, 00:11:22.515 "state": "online", 00:11:22.515 "raid_level": "raid1", 00:11:22.515 "superblock": true, 00:11:22.515 "num_base_bdevs": 4, 00:11:22.515 "num_base_bdevs_discovered": 4, 00:11:22.515 "num_base_bdevs_operational": 4, 00:11:22.515 "base_bdevs_list": [ 00:11:22.515 { 00:11:22.515 "name": "BaseBdev1", 00:11:22.515 "uuid": "d25cd10d-c695-4d32-bbf9-acfabcd40625", 00:11:22.515 "is_configured": true, 00:11:22.515 "data_offset": 2048, 00:11:22.515 "data_size": 63488 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "name": "BaseBdev2", 00:11:22.515 "uuid": "985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:22.515 "is_configured": true, 00:11:22.515 "data_offset": 2048, 00:11:22.515 "data_size": 63488 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "name": "BaseBdev3", 00:11:22.515 "uuid": "ad417e2a-1e97-4f08-a4a5-596cfd5ea253", 00:11:22.515 "is_configured": true, 
00:11:22.515 "data_offset": 2048, 00:11:22.515 "data_size": 63488 00:11:22.515 }, 00:11:22.515 { 00:11:22.515 "name": "BaseBdev4", 00:11:22.515 "uuid": "886d1190-a508-4bd6-9c64-e25efb30a6ec", 00:11:22.515 "is_configured": true, 00:11:22.515 "data_offset": 2048, 00:11:22.515 "data_size": 63488 00:11:22.515 } 00:11:22.515 ] 00:11:22.515 } 00:11:22.515 } 00:11:22.515 }' 00:11:22.515 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.775 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:22.775 BaseBdev2 00:11:22.775 BaseBdev3 00:11:22.775 BaseBdev4' 00:11:22.775 17:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.775 17:52:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.775 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.035 [2024-10-25 17:52:41.218235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:23.035 17:52:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.035 "name": "Existed_Raid", 00:11:23.035 "uuid": "bc6f31d0-a04d-4304-bdac-8209c404183c", 00:11:23.035 "strip_size_kb": 0, 00:11:23.035 
"state": "online", 00:11:23.035 "raid_level": "raid1", 00:11:23.035 "superblock": true, 00:11:23.035 "num_base_bdevs": 4, 00:11:23.035 "num_base_bdevs_discovered": 3, 00:11:23.035 "num_base_bdevs_operational": 3, 00:11:23.035 "base_bdevs_list": [ 00:11:23.035 { 00:11:23.035 "name": null, 00:11:23.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.035 "is_configured": false, 00:11:23.035 "data_offset": 0, 00:11:23.035 "data_size": 63488 00:11:23.035 }, 00:11:23.035 { 00:11:23.035 "name": "BaseBdev2", 00:11:23.035 "uuid": "985f4170-f29c-44bd-975d-c89ab6a2405d", 00:11:23.035 "is_configured": true, 00:11:23.035 "data_offset": 2048, 00:11:23.035 "data_size": 63488 00:11:23.035 }, 00:11:23.035 { 00:11:23.035 "name": "BaseBdev3", 00:11:23.035 "uuid": "ad417e2a-1e97-4f08-a4a5-596cfd5ea253", 00:11:23.035 "is_configured": true, 00:11:23.035 "data_offset": 2048, 00:11:23.035 "data_size": 63488 00:11:23.035 }, 00:11:23.035 { 00:11:23.035 "name": "BaseBdev4", 00:11:23.035 "uuid": "886d1190-a508-4bd6-9c64-e25efb30a6ec", 00:11:23.035 "is_configured": true, 00:11:23.035 "data_offset": 2048, 00:11:23.035 "data_size": 63488 00:11:23.035 } 00:11:23.035 ] 00:11:23.035 }' 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.035 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.602 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.602 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.603 17:52:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.603 [2024-10-25 17:52:41.824939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.603 17:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.603 [2024-10-25 17:52:41.982357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.863 [2024-10-25 17:52:42.146025] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:23.863 [2024-10-25 17:52:42.146193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.863 [2024-10-25 17:52:42.246221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.863 [2024-10-25 17:52:42.246358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.863 [2024-10-25 17:52:42.246402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.863 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.123 BaseBdev2 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.123 17:52:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:24.123 [ 00:11:24.123 { 00:11:24.123 "name": "BaseBdev2", 00:11:24.123 "aliases": [ 00:11:24.123 "c7370096-109e-4258-ad3d-e7d874c2cd92" 00:11:24.123 ], 00:11:24.123 "product_name": "Malloc disk", 00:11:24.123 "block_size": 512, 00:11:24.123 "num_blocks": 65536, 00:11:24.123 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:24.123 "assigned_rate_limits": { 00:11:24.123 "rw_ios_per_sec": 0, 00:11:24.123 "rw_mbytes_per_sec": 0, 00:11:24.123 "r_mbytes_per_sec": 0, 00:11:24.123 "w_mbytes_per_sec": 0 00:11:24.123 }, 00:11:24.123 "claimed": false, 00:11:24.123 "zoned": false, 00:11:24.123 "supported_io_types": { 00:11:24.123 "read": true, 00:11:24.123 "write": true, 00:11:24.123 "unmap": true, 00:11:24.123 "flush": true, 00:11:24.123 "reset": true, 00:11:24.123 "nvme_admin": false, 00:11:24.123 "nvme_io": false, 00:11:24.123 "nvme_io_md": false, 00:11:24.123 "write_zeroes": true, 00:11:24.123 "zcopy": true, 00:11:24.123 "get_zone_info": false, 00:11:24.123 "zone_management": false, 00:11:24.123 "zone_append": false, 00:11:24.124 "compare": false, 00:11:24.124 "compare_and_write": false, 00:11:24.124 "abort": true, 00:11:24.124 "seek_hole": false, 00:11:24.124 "seek_data": false, 00:11:24.124 "copy": true, 00:11:24.124 "nvme_iov_md": false 00:11:24.124 }, 00:11:24.124 "memory_domains": [ 00:11:24.124 { 00:11:24.124 "dma_device_id": "system", 00:11:24.124 "dma_device_type": 1 00:11:24.124 }, 00:11:24.124 { 00:11:24.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.124 "dma_device_type": 2 00:11:24.124 } 00:11:24.124 ], 00:11:24.124 "driver_specific": {} 00:11:24.124 } 00:11:24.124 ] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.124 17:52:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 BaseBdev3 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 [ 00:11:24.124 { 00:11:24.124 "name": "BaseBdev3", 00:11:24.124 "aliases": [ 00:11:24.124 "cc0d7161-3a41-4c4b-8560-87191345bd4b" 00:11:24.124 ], 00:11:24.124 "product_name": "Malloc disk", 00:11:24.124 "block_size": 512, 00:11:24.124 "num_blocks": 65536, 00:11:24.124 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:24.124 "assigned_rate_limits": { 00:11:24.124 "rw_ios_per_sec": 0, 00:11:24.124 "rw_mbytes_per_sec": 0, 00:11:24.124 "r_mbytes_per_sec": 0, 00:11:24.124 "w_mbytes_per_sec": 0 00:11:24.124 }, 00:11:24.124 "claimed": false, 00:11:24.124 "zoned": false, 00:11:24.124 "supported_io_types": { 00:11:24.124 "read": true, 00:11:24.124 "write": true, 00:11:24.124 "unmap": true, 00:11:24.124 "flush": true, 00:11:24.124 "reset": true, 00:11:24.124 "nvme_admin": false, 00:11:24.124 "nvme_io": false, 00:11:24.124 "nvme_io_md": false, 00:11:24.124 "write_zeroes": true, 00:11:24.124 "zcopy": true, 00:11:24.124 "get_zone_info": false, 00:11:24.124 "zone_management": false, 00:11:24.124 "zone_append": false, 00:11:24.124 "compare": false, 00:11:24.124 "compare_and_write": false, 00:11:24.124 "abort": true, 00:11:24.124 "seek_hole": false, 00:11:24.124 "seek_data": false, 00:11:24.124 "copy": true, 00:11:24.124 "nvme_iov_md": false 00:11:24.124 }, 00:11:24.124 "memory_domains": [ 00:11:24.124 { 00:11:24.124 "dma_device_id": "system", 00:11:24.124 "dma_device_type": 1 00:11:24.124 }, 00:11:24.124 { 00:11:24.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.124 "dma_device_type": 2 00:11:24.124 } 00:11:24.124 ], 00:11:24.124 "driver_specific": {} 00:11:24.124 } 00:11:24.124 ] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 BaseBdev4 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.124 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 [ 00:11:24.124 { 00:11:24.124 "name": "BaseBdev4", 00:11:24.124 "aliases": [ 00:11:24.124 "08720046-e086-41d8-9889-e3807218f0ea" 00:11:24.124 ], 00:11:24.124 "product_name": "Malloc disk", 00:11:24.124 "block_size": 512, 00:11:24.124 "num_blocks": 65536, 00:11:24.124 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:24.124 "assigned_rate_limits": { 00:11:24.124 "rw_ios_per_sec": 0, 00:11:24.124 "rw_mbytes_per_sec": 0, 00:11:24.124 "r_mbytes_per_sec": 0, 00:11:24.124 "w_mbytes_per_sec": 0 00:11:24.124 }, 00:11:24.124 "claimed": false, 00:11:24.124 "zoned": false, 00:11:24.124 "supported_io_types": { 00:11:24.124 "read": true, 00:11:24.124 "write": true, 00:11:24.124 "unmap": true, 00:11:24.124 "flush": true, 00:11:24.124 "reset": true, 00:11:24.124 "nvme_admin": false, 00:11:24.124 "nvme_io": false, 00:11:24.124 "nvme_io_md": false, 00:11:24.124 "write_zeroes": true, 00:11:24.124 "zcopy": true, 00:11:24.124 "get_zone_info": false, 00:11:24.124 "zone_management": false, 00:11:24.124 "zone_append": false, 00:11:24.125 "compare": false, 00:11:24.125 "compare_and_write": false, 00:11:24.125 "abort": true, 00:11:24.125 "seek_hole": false, 00:11:24.125 "seek_data": false, 00:11:24.125 "copy": true, 00:11:24.125 "nvme_iov_md": false 00:11:24.125 }, 00:11:24.125 "memory_domains": [ 00:11:24.125 { 00:11:24.125 "dma_device_id": "system", 00:11:24.125 "dma_device_type": 1 00:11:24.125 }, 00:11:24.385 { 00:11:24.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.385 "dma_device_type": 2 00:11:24.385 } 00:11:24.385 ], 00:11:24.385 "driver_specific": {} 00:11:24.385 } 00:11:24.385 ] 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.385 [2024-10-25 17:52:42.570802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.385 [2024-10-25 17:52:42.570860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.385 [2024-10-25 17:52:42.570884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.385 [2024-10-25 17:52:42.572746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.385 [2024-10-25 17:52:42.572794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.385 "name": "Existed_Raid", 00:11:24.385 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:24.385 "strip_size_kb": 0, 00:11:24.385 "state": "configuring", 00:11:24.385 "raid_level": "raid1", 00:11:24.385 "superblock": true, 00:11:24.385 "num_base_bdevs": 4, 00:11:24.385 "num_base_bdevs_discovered": 3, 00:11:24.385 "num_base_bdevs_operational": 4, 00:11:24.385 "base_bdevs_list": [ 00:11:24.385 { 00:11:24.385 "name": "BaseBdev1", 00:11:24.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.385 "is_configured": false, 00:11:24.385 "data_offset": 0, 00:11:24.385 "data_size": 0 00:11:24.385 }, 00:11:24.385 { 00:11:24.385 "name": "BaseBdev2", 00:11:24.385 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 
00:11:24.385 "is_configured": true, 00:11:24.385 "data_offset": 2048, 00:11:24.385 "data_size": 63488 00:11:24.385 }, 00:11:24.385 { 00:11:24.385 "name": "BaseBdev3", 00:11:24.385 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:24.385 "is_configured": true, 00:11:24.385 "data_offset": 2048, 00:11:24.385 "data_size": 63488 00:11:24.385 }, 00:11:24.385 { 00:11:24.385 "name": "BaseBdev4", 00:11:24.385 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:24.385 "is_configured": true, 00:11:24.385 "data_offset": 2048, 00:11:24.385 "data_size": 63488 00:11:24.385 } 00:11:24.385 ] 00:11:24.385 }' 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.385 17:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.644 [2024-10-25 17:52:43.030029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:24.644 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.645 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.903 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.903 "name": "Existed_Raid", 00:11:24.903 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:24.903 "strip_size_kb": 0, 00:11:24.903 "state": "configuring", 00:11:24.903 "raid_level": "raid1", 00:11:24.903 "superblock": true, 00:11:24.904 "num_base_bdevs": 4, 00:11:24.904 "num_base_bdevs_discovered": 2, 00:11:24.904 "num_base_bdevs_operational": 4, 00:11:24.904 "base_bdevs_list": [ 00:11:24.904 { 00:11:24.904 "name": "BaseBdev1", 00:11:24.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.904 "is_configured": false, 00:11:24.904 "data_offset": 0, 00:11:24.904 "data_size": 0 00:11:24.904 }, 00:11:24.904 { 00:11:24.904 "name": null, 00:11:24.904 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:24.904 
"is_configured": false, 00:11:24.904 "data_offset": 0, 00:11:24.904 "data_size": 63488 00:11:24.904 }, 00:11:24.904 { 00:11:24.904 "name": "BaseBdev3", 00:11:24.904 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:24.904 "is_configured": true, 00:11:24.904 "data_offset": 2048, 00:11:24.904 "data_size": 63488 00:11:24.904 }, 00:11:24.904 { 00:11:24.904 "name": "BaseBdev4", 00:11:24.904 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:24.904 "is_configured": true, 00:11:24.904 "data_offset": 2048, 00:11:24.904 "data_size": 63488 00:11:24.904 } 00:11:24.904 ] 00:11:24.904 }' 00:11:24.904 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.904 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.162 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.162 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.162 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.162 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.163 [2024-10-25 17:52:43.533642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.163 BaseBdev1 
00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.163 [ 00:11:25.163 { 00:11:25.163 "name": "BaseBdev1", 00:11:25.163 "aliases": [ 00:11:25.163 "3d82824a-468a-47b3-b9d6-0e977d730290" 00:11:25.163 ], 00:11:25.163 "product_name": "Malloc disk", 00:11:25.163 "block_size": 512, 00:11:25.163 "num_blocks": 65536, 00:11:25.163 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:25.163 "assigned_rate_limits": { 00:11:25.163 
"rw_ios_per_sec": 0, 00:11:25.163 "rw_mbytes_per_sec": 0, 00:11:25.163 "r_mbytes_per_sec": 0, 00:11:25.163 "w_mbytes_per_sec": 0 00:11:25.163 }, 00:11:25.163 "claimed": true, 00:11:25.163 "claim_type": "exclusive_write", 00:11:25.163 "zoned": false, 00:11:25.163 "supported_io_types": { 00:11:25.163 "read": true, 00:11:25.163 "write": true, 00:11:25.163 "unmap": true, 00:11:25.163 "flush": true, 00:11:25.163 "reset": true, 00:11:25.163 "nvme_admin": false, 00:11:25.163 "nvme_io": false, 00:11:25.163 "nvme_io_md": false, 00:11:25.163 "write_zeroes": true, 00:11:25.163 "zcopy": true, 00:11:25.163 "get_zone_info": false, 00:11:25.163 "zone_management": false, 00:11:25.163 "zone_append": false, 00:11:25.163 "compare": false, 00:11:25.163 "compare_and_write": false, 00:11:25.163 "abort": true, 00:11:25.163 "seek_hole": false, 00:11:25.163 "seek_data": false, 00:11:25.163 "copy": true, 00:11:25.163 "nvme_iov_md": false 00:11:25.163 }, 00:11:25.163 "memory_domains": [ 00:11:25.163 { 00:11:25.163 "dma_device_id": "system", 00:11:25.163 "dma_device_type": 1 00:11:25.163 }, 00:11:25.163 { 00:11:25.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.163 "dma_device_type": 2 00:11:25.163 } 00:11:25.163 ], 00:11:25.163 "driver_specific": {} 00:11:25.163 } 00:11:25.163 ] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.163 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.423 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.423 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.423 "name": "Existed_Raid", 00:11:25.423 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:25.423 "strip_size_kb": 0, 00:11:25.423 "state": "configuring", 00:11:25.423 "raid_level": "raid1", 00:11:25.423 "superblock": true, 00:11:25.423 "num_base_bdevs": 4, 00:11:25.423 "num_base_bdevs_discovered": 3, 00:11:25.423 "num_base_bdevs_operational": 4, 00:11:25.423 "base_bdevs_list": [ 00:11:25.423 { 00:11:25.423 "name": "BaseBdev1", 00:11:25.423 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:25.423 "is_configured": true, 00:11:25.423 "data_offset": 2048, 00:11:25.423 "data_size": 63488 
00:11:25.423 }, 00:11:25.423 { 00:11:25.423 "name": null, 00:11:25.423 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:25.423 "is_configured": false, 00:11:25.423 "data_offset": 0, 00:11:25.423 "data_size": 63488 00:11:25.423 }, 00:11:25.423 { 00:11:25.423 "name": "BaseBdev3", 00:11:25.423 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:25.423 "is_configured": true, 00:11:25.423 "data_offset": 2048, 00:11:25.423 "data_size": 63488 00:11:25.423 }, 00:11:25.423 { 00:11:25.423 "name": "BaseBdev4", 00:11:25.423 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:25.423 "is_configured": true, 00:11:25.423 "data_offset": 2048, 00:11:25.423 "data_size": 63488 00:11:25.423 } 00:11:25.423 ] 00:11:25.423 }' 00:11:25.423 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.423 17:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.684 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.684 17:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.684 
[2024-10-25 17:52:44.032909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.684 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.685 17:52:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.685 "name": "Existed_Raid", 00:11:25.685 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:25.685 "strip_size_kb": 0, 00:11:25.685 "state": "configuring", 00:11:25.685 "raid_level": "raid1", 00:11:25.685 "superblock": true, 00:11:25.685 "num_base_bdevs": 4, 00:11:25.685 "num_base_bdevs_discovered": 2, 00:11:25.685 "num_base_bdevs_operational": 4, 00:11:25.685 "base_bdevs_list": [ 00:11:25.685 { 00:11:25.685 "name": "BaseBdev1", 00:11:25.685 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:25.685 "is_configured": true, 00:11:25.685 "data_offset": 2048, 00:11:25.685 "data_size": 63488 00:11:25.685 }, 00:11:25.685 { 00:11:25.685 "name": null, 00:11:25.685 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:25.685 "is_configured": false, 00:11:25.685 "data_offset": 0, 00:11:25.685 "data_size": 63488 00:11:25.685 }, 00:11:25.685 { 00:11:25.685 "name": null, 00:11:25.685 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:25.685 "is_configured": false, 00:11:25.685 "data_offset": 0, 00:11:25.685 "data_size": 63488 00:11:25.685 }, 00:11:25.685 { 00:11:25.685 "name": "BaseBdev4", 00:11:25.685 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:25.685 "is_configured": true, 00:11:25.685 "data_offset": 2048, 00:11:25.685 "data_size": 63488 00:11:25.685 } 00:11:25.685 ] 00:11:25.685 }' 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.685 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.260 
17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 [2024-10-25 17:52:44.576019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.260 "name": "Existed_Raid", 00:11:26.260 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:26.260 "strip_size_kb": 0, 00:11:26.260 "state": "configuring", 00:11:26.260 "raid_level": "raid1", 00:11:26.260 "superblock": true, 00:11:26.260 "num_base_bdevs": 4, 00:11:26.260 "num_base_bdevs_discovered": 3, 00:11:26.260 "num_base_bdevs_operational": 4, 00:11:26.260 "base_bdevs_list": [ 00:11:26.260 { 00:11:26.260 "name": "BaseBdev1", 00:11:26.260 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:26.260 "is_configured": true, 00:11:26.260 "data_offset": 2048, 00:11:26.260 "data_size": 63488 00:11:26.260 }, 00:11:26.260 { 00:11:26.260 "name": null, 00:11:26.260 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:26.260 "is_configured": false, 00:11:26.260 "data_offset": 0, 00:11:26.260 "data_size": 63488 00:11:26.260 }, 00:11:26.260 { 00:11:26.260 "name": "BaseBdev3", 00:11:26.260 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:26.260 "is_configured": true, 00:11:26.260 "data_offset": 2048, 00:11:26.260 "data_size": 63488 00:11:26.260 }, 00:11:26.260 { 00:11:26.260 "name": "BaseBdev4", 00:11:26.260 "uuid": 
"08720046-e086-41d8-9889-e3807218f0ea", 00:11:26.260 "is_configured": true, 00:11:26.260 "data_offset": 2048, 00:11:26.260 "data_size": 63488 00:11:26.260 } 00:11:26.260 ] 00:11:26.260 }' 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.260 17:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.840 [2024-10-25 17:52:45.087194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.840 "name": "Existed_Raid", 00:11:26.840 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:26.840 "strip_size_kb": 0, 00:11:26.840 "state": "configuring", 00:11:26.840 "raid_level": "raid1", 00:11:26.840 "superblock": true, 00:11:26.840 "num_base_bdevs": 4, 00:11:26.840 "num_base_bdevs_discovered": 2, 00:11:26.840 "num_base_bdevs_operational": 4, 00:11:26.840 "base_bdevs_list": [ 00:11:26.840 { 00:11:26.840 "name": null, 00:11:26.840 
"uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:26.840 "is_configured": false, 00:11:26.840 "data_offset": 0, 00:11:26.840 "data_size": 63488 00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": null, 00:11:26.840 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:26.840 "is_configured": false, 00:11:26.840 "data_offset": 0, 00:11:26.840 "data_size": 63488 00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": "BaseBdev3", 00:11:26.840 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:26.840 "is_configured": true, 00:11:26.840 "data_offset": 2048, 00:11:26.840 "data_size": 63488 00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": "BaseBdev4", 00:11:26.840 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:26.840 "is_configured": true, 00:11:26.840 "data_offset": 2048, 00:11:26.840 "data_size": 63488 00:11:26.840 } 00:11:26.840 ] 00:11:26.840 }' 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.840 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.410 [2024-10-25 17:52:45.699745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.410 17:52:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.410 "name": "Existed_Raid", 00:11:27.410 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:27.410 "strip_size_kb": 0, 00:11:27.410 "state": "configuring", 00:11:27.410 "raid_level": "raid1", 00:11:27.410 "superblock": true, 00:11:27.410 "num_base_bdevs": 4, 00:11:27.410 "num_base_bdevs_discovered": 3, 00:11:27.410 "num_base_bdevs_operational": 4, 00:11:27.410 "base_bdevs_list": [ 00:11:27.410 { 00:11:27.410 "name": null, 00:11:27.410 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:27.410 "is_configured": false, 00:11:27.410 "data_offset": 0, 00:11:27.410 "data_size": 63488 00:11:27.410 }, 00:11:27.410 { 00:11:27.410 "name": "BaseBdev2", 00:11:27.410 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:27.410 "is_configured": true, 00:11:27.410 "data_offset": 2048, 00:11:27.410 "data_size": 63488 00:11:27.410 }, 00:11:27.410 { 00:11:27.410 "name": "BaseBdev3", 00:11:27.410 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:27.410 "is_configured": true, 00:11:27.410 "data_offset": 2048, 00:11:27.410 "data_size": 63488 00:11:27.410 }, 00:11:27.410 { 00:11:27.410 "name": "BaseBdev4", 00:11:27.410 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:27.410 "is_configured": true, 00:11:27.410 "data_offset": 2048, 00:11:27.410 "data_size": 63488 00:11:27.410 } 00:11:27.410 ] 00:11:27.410 }' 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.410 17:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.978 17:52:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d82824a-468a-47b3-b9d6-0e977d730290 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 [2024-10-25 17:52:46.313370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:27.978 [2024-10-25 17:52:46.313663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:27.978 [2024-10-25 17:52:46.313688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.978 NewBaseBdev 00:11:27.978 [2024-10-25 17:52:46.314033] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:27.978 [2024-10-25 17:52:46.314222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:27.978 [2024-10-25 17:52:46.314234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:27.978 [2024-10-25 17:52:46.314414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:27.978 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.978 17:52:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.978 [ 00:11:27.978 { 00:11:27.978 "name": "NewBaseBdev", 00:11:27.978 "aliases": [ 00:11:27.978 "3d82824a-468a-47b3-b9d6-0e977d730290" 00:11:27.978 ], 00:11:27.978 "product_name": "Malloc disk", 00:11:27.978 "block_size": 512, 00:11:27.979 "num_blocks": 65536, 00:11:27.979 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:27.979 "assigned_rate_limits": { 00:11:27.979 "rw_ios_per_sec": 0, 00:11:27.979 "rw_mbytes_per_sec": 0, 00:11:27.979 "r_mbytes_per_sec": 0, 00:11:27.979 "w_mbytes_per_sec": 0 00:11:27.979 }, 00:11:27.979 "claimed": true, 00:11:27.979 "claim_type": "exclusive_write", 00:11:27.979 "zoned": false, 00:11:27.979 "supported_io_types": { 00:11:27.979 "read": true, 00:11:27.979 "write": true, 00:11:27.979 "unmap": true, 00:11:27.979 "flush": true, 00:11:27.979 "reset": true, 00:11:27.979 "nvme_admin": false, 00:11:27.979 "nvme_io": false, 00:11:27.979 "nvme_io_md": false, 00:11:27.979 "write_zeroes": true, 00:11:27.979 "zcopy": true, 00:11:27.979 "get_zone_info": false, 00:11:27.979 "zone_management": false, 00:11:27.979 "zone_append": false, 00:11:27.979 "compare": false, 00:11:27.979 "compare_and_write": false, 00:11:27.979 "abort": true, 00:11:27.979 "seek_hole": false, 00:11:27.979 "seek_data": false, 00:11:27.979 "copy": true, 00:11:27.979 "nvme_iov_md": false 00:11:27.979 }, 00:11:27.979 "memory_domains": [ 00:11:27.979 { 00:11:27.979 "dma_device_id": "system", 00:11:27.979 "dma_device_type": 1 00:11:27.979 }, 00:11:27.979 { 00:11:27.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.979 "dma_device_type": 2 00:11:27.979 } 00:11:27.979 ], 00:11:27.979 "driver_specific": {} 00:11:27.979 } 00:11:27.979 ] 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:27.979 17:52:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.979 "name": "Existed_Raid", 00:11:27.979 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:27.979 "strip_size_kb": 0, 00:11:27.979 
"state": "online", 00:11:27.979 "raid_level": "raid1", 00:11:27.979 "superblock": true, 00:11:27.979 "num_base_bdevs": 4, 00:11:27.979 "num_base_bdevs_discovered": 4, 00:11:27.979 "num_base_bdevs_operational": 4, 00:11:27.979 "base_bdevs_list": [ 00:11:27.979 { 00:11:27.979 "name": "NewBaseBdev", 00:11:27.979 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:27.979 "is_configured": true, 00:11:27.979 "data_offset": 2048, 00:11:27.979 "data_size": 63488 00:11:27.979 }, 00:11:27.979 { 00:11:27.979 "name": "BaseBdev2", 00:11:27.979 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:27.979 "is_configured": true, 00:11:27.979 "data_offset": 2048, 00:11:27.979 "data_size": 63488 00:11:27.979 }, 00:11:27.979 { 00:11:27.979 "name": "BaseBdev3", 00:11:27.979 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:27.979 "is_configured": true, 00:11:27.979 "data_offset": 2048, 00:11:27.979 "data_size": 63488 00:11:27.979 }, 00:11:27.979 { 00:11:27.979 "name": "BaseBdev4", 00:11:27.979 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:27.979 "is_configured": true, 00:11:27.979 "data_offset": 2048, 00:11:27.979 "data_size": 63488 00:11:27.979 } 00:11:27.979 ] 00:11:27.979 }' 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.979 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.548 
17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.548 [2024-10-25 17:52:46.837024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.548 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.548 "name": "Existed_Raid", 00:11:28.548 "aliases": [ 00:11:28.548 "9719e922-79c7-4f2b-a664-cbb0ad12ea5a" 00:11:28.548 ], 00:11:28.548 "product_name": "Raid Volume", 00:11:28.548 "block_size": 512, 00:11:28.548 "num_blocks": 63488, 00:11:28.548 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:28.548 "assigned_rate_limits": { 00:11:28.548 "rw_ios_per_sec": 0, 00:11:28.548 "rw_mbytes_per_sec": 0, 00:11:28.548 "r_mbytes_per_sec": 0, 00:11:28.548 "w_mbytes_per_sec": 0 00:11:28.548 }, 00:11:28.548 "claimed": false, 00:11:28.548 "zoned": false, 00:11:28.548 "supported_io_types": { 00:11:28.548 "read": true, 00:11:28.548 "write": true, 00:11:28.548 "unmap": false, 00:11:28.548 "flush": false, 00:11:28.548 "reset": true, 00:11:28.548 "nvme_admin": false, 00:11:28.548 "nvme_io": false, 00:11:28.548 "nvme_io_md": false, 00:11:28.548 "write_zeroes": true, 00:11:28.548 "zcopy": false, 00:11:28.548 "get_zone_info": false, 00:11:28.548 "zone_management": false, 00:11:28.548 "zone_append": false, 00:11:28.548 "compare": false, 00:11:28.548 "compare_and_write": false, 00:11:28.548 
"abort": false, 00:11:28.548 "seek_hole": false, 00:11:28.548 "seek_data": false, 00:11:28.548 "copy": false, 00:11:28.548 "nvme_iov_md": false 00:11:28.548 }, 00:11:28.548 "memory_domains": [ 00:11:28.548 { 00:11:28.548 "dma_device_id": "system", 00:11:28.548 "dma_device_type": 1 00:11:28.548 }, 00:11:28.548 { 00:11:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.548 "dma_device_type": 2 00:11:28.548 }, 00:11:28.548 { 00:11:28.548 "dma_device_id": "system", 00:11:28.548 "dma_device_type": 1 00:11:28.548 }, 00:11:28.548 { 00:11:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.548 "dma_device_type": 2 00:11:28.548 }, 00:11:28.548 { 00:11:28.548 "dma_device_id": "system", 00:11:28.549 "dma_device_type": 1 00:11:28.549 }, 00:11:28.549 { 00:11:28.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.549 "dma_device_type": 2 00:11:28.549 }, 00:11:28.549 { 00:11:28.549 "dma_device_id": "system", 00:11:28.549 "dma_device_type": 1 00:11:28.549 }, 00:11:28.549 { 00:11:28.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.549 "dma_device_type": 2 00:11:28.549 } 00:11:28.549 ], 00:11:28.549 "driver_specific": { 00:11:28.549 "raid": { 00:11:28.549 "uuid": "9719e922-79c7-4f2b-a664-cbb0ad12ea5a", 00:11:28.549 "strip_size_kb": 0, 00:11:28.549 "state": "online", 00:11:28.549 "raid_level": "raid1", 00:11:28.549 "superblock": true, 00:11:28.549 "num_base_bdevs": 4, 00:11:28.549 "num_base_bdevs_discovered": 4, 00:11:28.549 "num_base_bdevs_operational": 4, 00:11:28.549 "base_bdevs_list": [ 00:11:28.549 { 00:11:28.549 "name": "NewBaseBdev", 00:11:28.549 "uuid": "3d82824a-468a-47b3-b9d6-0e977d730290", 00:11:28.549 "is_configured": true, 00:11:28.549 "data_offset": 2048, 00:11:28.549 "data_size": 63488 00:11:28.549 }, 00:11:28.549 { 00:11:28.549 "name": "BaseBdev2", 00:11:28.549 "uuid": "c7370096-109e-4258-ad3d-e7d874c2cd92", 00:11:28.549 "is_configured": true, 00:11:28.549 "data_offset": 2048, 00:11:28.549 "data_size": 63488 00:11:28.549 }, 00:11:28.549 { 
00:11:28.549 "name": "BaseBdev3", 00:11:28.549 "uuid": "cc0d7161-3a41-4c4b-8560-87191345bd4b", 00:11:28.549 "is_configured": true, 00:11:28.549 "data_offset": 2048, 00:11:28.549 "data_size": 63488 00:11:28.549 }, 00:11:28.549 { 00:11:28.549 "name": "BaseBdev4", 00:11:28.549 "uuid": "08720046-e086-41d8-9889-e3807218f0ea", 00:11:28.549 "is_configured": true, 00:11:28.549 "data_offset": 2048, 00:11:28.549 "data_size": 63488 00:11:28.549 } 00:11:28.549 ] 00:11:28.549 } 00:11:28.549 } 00:11:28.549 }' 00:11:28.549 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.549 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.549 BaseBdev2 00:11:28.549 BaseBdev3 00:11:28.549 BaseBdev4' 00:11:28.549 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.549 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.549 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.808 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:28.809 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.809 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.809 17:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.809 17:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.809 [2024-10-25 17:52:47.160205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.809 [2024-10-25 17:52:47.160246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.809 [2024-10-25 17:52:47.160350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.809 [2024-10-25 17:52:47.160696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.809 [2024-10-25 17:52:47.160721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73583 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73583 ']' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73583 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73583 00:11:28.809 killing process with pid 73583 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73583' 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73583 00:11:28.809 [2024-10-25 17:52:47.207333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.809 17:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73583 00:11:29.377 [2024-10-25 17:52:47.706064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.755 17:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.755 00:11:30.755 real 0m12.075s 00:11:30.755 user 0m18.993s 00:11:30.755 sys 0m2.199s 00:11:30.755 17:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:30.755 17:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 ************************************ 00:11:30.755 END TEST raid_state_function_test_sb 00:11:30.755 ************************************ 00:11:30.755 17:52:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:30.755 17:52:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.755 17:52:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.755 17:52:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 ************************************ 00:11:30.755 START TEST raid_superblock_test 00:11:30.755 ************************************ 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:30.755 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:30.756 17:52:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74254 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74254 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74254 ']' 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.756 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.756 [2024-10-25 17:52:49.129489] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:11:30.756 [2024-10-25 17:52:49.129644] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74254 ] 00:11:31.015 [2024-10-25 17:52:49.289918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.015 [2024-10-25 17:52:49.432496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.273 [2024-10-25 17:52:49.678909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.273 [2024-10-25 17:52:49.678995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:31.842 
17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.842 17:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.842 malloc1 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.842 [2024-10-25 17:52:50.049937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:31.842 [2024-10-25 17:52:50.050031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.842 [2024-10-25 17:52:50.050060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:31.842 [2024-10-25 17:52:50.050071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.842 [2024-10-25 17:52:50.052684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.842 [2024-10-25 17:52:50.052725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:31.842 pt1 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.842 malloc2 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.842 [2024-10-25 17:52:50.113225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.842 [2024-10-25 17:52:50.113307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.842 [2024-10-25 17:52:50.113334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:31.842 [2024-10-25 17:52:50.113344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.842 [2024-10-25 17:52:50.115937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.842 [2024-10-25 17:52:50.115973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.842 
pt2 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:31.842 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 malloc3 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 [2024-10-25 17:52:50.190662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:31.843 [2024-10-25 17:52:50.190742] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.843 [2024-10-25 17:52:50.190769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:31.843 [2024-10-25 17:52:50.190779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.843 [2024-10-25 17:52:50.193374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.843 [2024-10-25 17:52:50.193416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:31.843 pt3 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 malloc4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 [2024-10-25 17:52:50.254556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:31.843 [2024-10-25 17:52:50.254623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.843 [2024-10-25 17:52:50.254647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:31.843 [2024-10-25 17:52:50.254657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.843 [2024-10-25 17:52:50.257280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.843 [2024-10-25 17:52:50.257318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:31.843 pt4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 [2024-10-25 17:52:50.270592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:31.843 [2024-10-25 17:52:50.272955] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:31.843 [2024-10-25 17:52:50.273033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:31.843 [2024-10-25 17:52:50.273083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:31.843 [2024-10-25 17:52:50.273317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:31.843 [2024-10-25 17:52:50.273345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.843 [2024-10-25 17:52:50.273715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:31.843 [2024-10-25 17:52:50.273935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:31.843 [2024-10-25 17:52:50.273960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:31.843 [2024-10-25 17:52:50.274151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.843 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.102 
17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.102 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.102 "name": "raid_bdev1", 00:11:32.102 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:32.102 "strip_size_kb": 0, 00:11:32.102 "state": "online", 00:11:32.102 "raid_level": "raid1", 00:11:32.102 "superblock": true, 00:11:32.102 "num_base_bdevs": 4, 00:11:32.102 "num_base_bdevs_discovered": 4, 00:11:32.102 "num_base_bdevs_operational": 4, 00:11:32.102 "base_bdevs_list": [ 00:11:32.102 { 00:11:32.102 "name": "pt1", 00:11:32.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.102 "is_configured": true, 00:11:32.102 "data_offset": 2048, 00:11:32.102 "data_size": 63488 00:11:32.102 }, 00:11:32.102 { 00:11:32.102 "name": "pt2", 00:11:32.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.102 "is_configured": true, 00:11:32.102 "data_offset": 2048, 00:11:32.102 "data_size": 63488 00:11:32.102 }, 00:11:32.102 { 00:11:32.102 "name": "pt3", 00:11:32.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.102 "is_configured": true, 00:11:32.102 "data_offset": 2048, 00:11:32.103 "data_size": 63488 
00:11:32.103 }, 00:11:32.103 { 00:11:32.103 "name": "pt4", 00:11:32.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:32.103 "is_configured": true, 00:11:32.103 "data_offset": 2048, 00:11:32.103 "data_size": 63488 00:11:32.103 } 00:11:32.103 ] 00:11:32.103 }' 00:11:32.103 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.103 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 [2024-10-25 17:52:50.750100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.362 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.362 "name": "raid_bdev1", 00:11:32.362 "aliases": [ 00:11:32.362 "16e121d0-6c46-403c-a5e3-3e6d4231f12c" 00:11:32.362 ], 
00:11:32.362 "product_name": "Raid Volume", 00:11:32.362 "block_size": 512, 00:11:32.362 "num_blocks": 63488, 00:11:32.362 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:32.362 "assigned_rate_limits": { 00:11:32.362 "rw_ios_per_sec": 0, 00:11:32.363 "rw_mbytes_per_sec": 0, 00:11:32.363 "r_mbytes_per_sec": 0, 00:11:32.363 "w_mbytes_per_sec": 0 00:11:32.363 }, 00:11:32.363 "claimed": false, 00:11:32.363 "zoned": false, 00:11:32.363 "supported_io_types": { 00:11:32.363 "read": true, 00:11:32.363 "write": true, 00:11:32.363 "unmap": false, 00:11:32.363 "flush": false, 00:11:32.363 "reset": true, 00:11:32.363 "nvme_admin": false, 00:11:32.363 "nvme_io": false, 00:11:32.363 "nvme_io_md": false, 00:11:32.363 "write_zeroes": true, 00:11:32.363 "zcopy": false, 00:11:32.363 "get_zone_info": false, 00:11:32.363 "zone_management": false, 00:11:32.363 "zone_append": false, 00:11:32.363 "compare": false, 00:11:32.363 "compare_and_write": false, 00:11:32.363 "abort": false, 00:11:32.363 "seek_hole": false, 00:11:32.363 "seek_data": false, 00:11:32.363 "copy": false, 00:11:32.363 "nvme_iov_md": false 00:11:32.363 }, 00:11:32.363 "memory_domains": [ 00:11:32.363 { 00:11:32.363 "dma_device_id": "system", 00:11:32.363 "dma_device_type": 1 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.363 "dma_device_type": 2 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "system", 00:11:32.363 "dma_device_type": 1 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.363 "dma_device_type": 2 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "system", 00:11:32.363 "dma_device_type": 1 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.363 "dma_device_type": 2 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": "system", 00:11:32.363 "dma_device_type": 1 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:32.363 "dma_device_type": 2 00:11:32.363 } 00:11:32.363 ], 00:11:32.363 "driver_specific": { 00:11:32.363 "raid": { 00:11:32.363 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:32.363 "strip_size_kb": 0, 00:11:32.363 "state": "online", 00:11:32.363 "raid_level": "raid1", 00:11:32.363 "superblock": true, 00:11:32.363 "num_base_bdevs": 4, 00:11:32.363 "num_base_bdevs_discovered": 4, 00:11:32.363 "num_base_bdevs_operational": 4, 00:11:32.363 "base_bdevs_list": [ 00:11:32.363 { 00:11:32.363 "name": "pt1", 00:11:32.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.363 "is_configured": true, 00:11:32.363 "data_offset": 2048, 00:11:32.363 "data_size": 63488 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "name": "pt2", 00:11:32.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.363 "is_configured": true, 00:11:32.363 "data_offset": 2048, 00:11:32.363 "data_size": 63488 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "name": "pt3", 00:11:32.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.363 "is_configured": true, 00:11:32.363 "data_offset": 2048, 00:11:32.363 "data_size": 63488 00:11:32.363 }, 00:11:32.363 { 00:11:32.363 "name": "pt4", 00:11:32.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:32.363 "is_configured": true, 00:11:32.363 "data_offset": 2048, 00:11:32.363 "data_size": 63488 00:11:32.363 } 00:11:32.363 ] 00:11:32.363 } 00:11:32.363 } 00:11:32.363 }' 00:11:32.363 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:32.626 pt2 00:11:32.626 pt3 00:11:32.626 pt4' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.626 17:52:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.626 17:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:32.626 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:32.892 [2024-10-25 17:52:51.061611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.892 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.892 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=16e121d0-6c46-403c-a5e3-3e6d4231f12c 00:11:32.892 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 16e121d0-6c46-403c-a5e3-3e6d4231f12c ']' 00:11:32.892 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 [2024-10-25 17:52:51.109149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.893 [2024-10-25 17:52:51.109203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.893 [2024-10-25 17:52:51.109340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.893 [2024-10-25 17:52:51.109441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.893 [2024-10-25 17:52:51.109474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 [2024-10-25 17:52:51.256931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:32.893 [2024-10-25 17:52:51.259278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:32.893 [2024-10-25 17:52:51.259340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:32.893 [2024-10-25 17:52:51.259376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:32.893 [2024-10-25 17:52:51.259436] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:32.893 [2024-10-25 17:52:51.259506] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:32.893 [2024-10-25 17:52:51.259528] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:32.893 [2024-10-25 17:52:51.259549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:32.893 [2024-10-25 17:52:51.259565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.893 [2024-10-25 17:52:51.259577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:11:32.893 request: 00:11:32.893 { 00:11:32.893 "name": "raid_bdev1", 00:11:32.893 "raid_level": "raid1", 00:11:32.893 "base_bdevs": [ 00:11:32.893 "malloc1", 00:11:32.893 "malloc2", 00:11:32.893 "malloc3", 00:11:32.893 "malloc4" 00:11:32.893 ], 00:11:32.893 "superblock": false, 00:11:32.893 "method": "bdev_raid_create", 00:11:32.893 "req_id": 1 00:11:32.893 } 00:11:32.893 Got JSON-RPC error response 00:11:32.893 response: 00:11:32.893 { 00:11:32.893 "code": -17, 00:11:32.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:32.893 } 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.893 17:52:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 [2024-10-25 17:52:51.316791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.893 [2024-10-25 17:52:51.316909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.893 [2024-10-25 17:52:51.316934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:32.893 [2024-10-25 17:52:51.316946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.893 [2024-10-25 17:52:51.319502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.893 [2024-10-25 17:52:51.319552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.893 [2024-10-25 17:52:51.319662] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:32.893 [2024-10-25 17:52:51.319727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.893 pt1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.893 17:52:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.893 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.153 "name": "raid_bdev1", 00:11:33.153 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:33.153 "strip_size_kb": 0, 00:11:33.153 "state": "configuring", 00:11:33.153 "raid_level": "raid1", 00:11:33.153 "superblock": true, 00:11:33.153 "num_base_bdevs": 4, 00:11:33.153 "num_base_bdevs_discovered": 1, 00:11:33.153 "num_base_bdevs_operational": 4, 00:11:33.153 "base_bdevs_list": [ 00:11:33.153 { 00:11:33.153 "name": "pt1", 00:11:33.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.153 "is_configured": true, 00:11:33.153 "data_offset": 2048, 00:11:33.153 "data_size": 63488 00:11:33.153 }, 00:11:33.153 { 00:11:33.153 "name": null, 00:11:33.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.153 "is_configured": false, 00:11:33.153 "data_offset": 2048, 00:11:33.153 "data_size": 63488 00:11:33.153 }, 00:11:33.153 { 00:11:33.153 "name": null, 00:11:33.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.153 
"is_configured": false, 00:11:33.153 "data_offset": 2048, 00:11:33.153 "data_size": 63488 00:11:33.153 }, 00:11:33.153 { 00:11:33.153 "name": null, 00:11:33.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.153 "is_configured": false, 00:11:33.153 "data_offset": 2048, 00:11:33.153 "data_size": 63488 00:11:33.153 } 00:11:33.153 ] 00:11:33.153 }' 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.153 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.413 [2024-10-25 17:52:51.756087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.413 [2024-10-25 17:52:51.756223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.413 [2024-10-25 17:52:51.756251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:33.413 [2024-10-25 17:52:51.756265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.413 [2024-10-25 17:52:51.756802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.413 [2024-10-25 17:52:51.756848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.413 [2024-10-25 17:52:51.756958] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.413 [2024-10-25 17:52:51.757002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:33.413 pt2 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.413 [2024-10-25 17:52:51.768070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.413 "name": "raid_bdev1", 00:11:33.413 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:33.413 "strip_size_kb": 0, 00:11:33.413 "state": "configuring", 00:11:33.413 "raid_level": "raid1", 00:11:33.413 "superblock": true, 00:11:33.413 "num_base_bdevs": 4, 00:11:33.413 "num_base_bdevs_discovered": 1, 00:11:33.413 "num_base_bdevs_operational": 4, 00:11:33.413 "base_bdevs_list": [ 00:11:33.413 { 00:11:33.413 "name": "pt1", 00:11:33.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.413 "is_configured": true, 00:11:33.413 "data_offset": 2048, 00:11:33.413 "data_size": 63488 00:11:33.413 }, 00:11:33.413 { 00:11:33.413 "name": null, 00:11:33.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.413 "is_configured": false, 00:11:33.413 "data_offset": 0, 00:11:33.413 "data_size": 63488 00:11:33.413 }, 00:11:33.413 { 00:11:33.413 "name": null, 00:11:33.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.413 "is_configured": false, 00:11:33.413 "data_offset": 2048, 00:11:33.413 "data_size": 63488 00:11:33.413 }, 00:11:33.413 { 00:11:33.413 "name": null, 00:11:33.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.413 "is_configured": false, 00:11:33.413 "data_offset": 2048, 00:11:33.413 "data_size": 63488 00:11:33.413 } 00:11:33.413 ] 00:11:33.413 }' 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.413 17:52:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.981 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:33.981 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.981 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.981 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.981 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.981 [2024-10-25 17:52:52.279207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.981 [2024-10-25 17:52:52.279317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.981 [2024-10-25 17:52:52.279351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:33.981 [2024-10-25 17:52:52.279364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.981 [2024-10-25 17:52:52.279944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.981 [2024-10-25 17:52:52.279971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.981 [2024-10-25 17:52:52.280082] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.982 [2024-10-25 17:52:52.280116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.982 pt2 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.982 17:52:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.982 [2024-10-25 17:52:52.287143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.982 [2024-10-25 17:52:52.287218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.982 [2024-10-25 17:52:52.287243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:33.982 [2024-10-25 17:52:52.287253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.982 [2024-10-25 17:52:52.287761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.982 [2024-10-25 17:52:52.287790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.982 [2024-10-25 17:52:52.287898] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:33.982 [2024-10-25 17:52:52.287922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.982 pt3 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.982 [2024-10-25 17:52:52.295079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.982 [2024-10-25 
17:52:52.295138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.982 [2024-10-25 17:52:52.295159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:33.982 [2024-10-25 17:52:52.295168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.982 [2024-10-25 17:52:52.295664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.982 [2024-10-25 17:52:52.295692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.982 [2024-10-25 17:52:52.295775] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:33.982 [2024-10-25 17:52:52.295796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.982 [2024-10-25 17:52:52.295977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.982 [2024-10-25 17:52:52.295992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.982 [2024-10-25 17:52:52.296307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:33.982 [2024-10-25 17:52:52.296503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.982 [2024-10-25 17:52:52.296525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:33.982 [2024-10-25 17:52:52.296699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.982 pt4 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.982 "name": "raid_bdev1", 00:11:33.982 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:33.982 "strip_size_kb": 0, 00:11:33.982 "state": "online", 00:11:33.982 "raid_level": "raid1", 00:11:33.982 "superblock": true, 00:11:33.982 "num_base_bdevs": 4, 00:11:33.982 
"num_base_bdevs_discovered": 4, 00:11:33.982 "num_base_bdevs_operational": 4, 00:11:33.982 "base_bdevs_list": [ 00:11:33.982 { 00:11:33.982 "name": "pt1", 00:11:33.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.982 "is_configured": true, 00:11:33.982 "data_offset": 2048, 00:11:33.982 "data_size": 63488 00:11:33.982 }, 00:11:33.982 { 00:11:33.982 "name": "pt2", 00:11:33.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.982 "is_configured": true, 00:11:33.982 "data_offset": 2048, 00:11:33.982 "data_size": 63488 00:11:33.982 }, 00:11:33.982 { 00:11:33.982 "name": "pt3", 00:11:33.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.982 "is_configured": true, 00:11:33.982 "data_offset": 2048, 00:11:33.982 "data_size": 63488 00:11:33.982 }, 00:11:33.982 { 00:11:33.982 "name": "pt4", 00:11:33.982 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.982 "is_configured": true, 00:11:33.982 "data_offset": 2048, 00:11:33.982 "data_size": 63488 00:11:33.982 } 00:11:33.982 ] 00:11:33.982 }' 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.982 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.550 [2024-10-25 17:52:52.738786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.550 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.550 "name": "raid_bdev1", 00:11:34.550 "aliases": [ 00:11:34.550 "16e121d0-6c46-403c-a5e3-3e6d4231f12c" 00:11:34.550 ], 00:11:34.550 "product_name": "Raid Volume", 00:11:34.550 "block_size": 512, 00:11:34.550 "num_blocks": 63488, 00:11:34.550 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:34.550 "assigned_rate_limits": { 00:11:34.550 "rw_ios_per_sec": 0, 00:11:34.550 "rw_mbytes_per_sec": 0, 00:11:34.550 "r_mbytes_per_sec": 0, 00:11:34.550 "w_mbytes_per_sec": 0 00:11:34.550 }, 00:11:34.550 "claimed": false, 00:11:34.550 "zoned": false, 00:11:34.550 "supported_io_types": { 00:11:34.550 "read": true, 00:11:34.550 "write": true, 00:11:34.550 "unmap": false, 00:11:34.550 "flush": false, 00:11:34.550 "reset": true, 00:11:34.550 "nvme_admin": false, 00:11:34.550 "nvme_io": false, 00:11:34.550 "nvme_io_md": false, 00:11:34.550 "write_zeroes": true, 00:11:34.550 "zcopy": false, 00:11:34.550 "get_zone_info": false, 00:11:34.550 "zone_management": false, 00:11:34.550 "zone_append": false, 00:11:34.550 "compare": false, 00:11:34.550 "compare_and_write": false, 00:11:34.550 "abort": false, 00:11:34.550 "seek_hole": false, 00:11:34.550 "seek_data": false, 00:11:34.550 "copy": false, 00:11:34.550 "nvme_iov_md": false 00:11:34.550 }, 00:11:34.550 "memory_domains": [ 00:11:34.550 { 00:11:34.550 "dma_device_id": "system", 00:11:34.550 
"dma_device_type": 1 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.551 "dma_device_type": 2 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "system", 00:11:34.551 "dma_device_type": 1 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.551 "dma_device_type": 2 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "system", 00:11:34.551 "dma_device_type": 1 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.551 "dma_device_type": 2 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "system", 00:11:34.551 "dma_device_type": 1 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.551 "dma_device_type": 2 00:11:34.551 } 00:11:34.551 ], 00:11:34.551 "driver_specific": { 00:11:34.551 "raid": { 00:11:34.551 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:34.551 "strip_size_kb": 0, 00:11:34.551 "state": "online", 00:11:34.551 "raid_level": "raid1", 00:11:34.551 "superblock": true, 00:11:34.551 "num_base_bdevs": 4, 00:11:34.551 "num_base_bdevs_discovered": 4, 00:11:34.551 "num_base_bdevs_operational": 4, 00:11:34.551 "base_bdevs_list": [ 00:11:34.551 { 00:11:34.551 "name": "pt1", 00:11:34.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.551 "is_configured": true, 00:11:34.551 "data_offset": 2048, 00:11:34.551 "data_size": 63488 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "name": "pt2", 00:11:34.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.551 "is_configured": true, 00:11:34.551 "data_offset": 2048, 00:11:34.551 "data_size": 63488 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "name": "pt3", 00:11:34.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.551 "is_configured": true, 00:11:34.551 "data_offset": 2048, 00:11:34.551 "data_size": 63488 00:11:34.551 }, 00:11:34.551 { 00:11:34.551 "name": "pt4", 00:11:34.551 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:34.551 "is_configured": true, 00:11:34.551 "data_offset": 2048, 00:11:34.551 "data_size": 63488 00:11:34.551 } 00:11:34.551 ] 00:11:34.551 } 00:11:34.551 } 00:11:34.551 }' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.551 pt2 00:11:34.551 pt3 00:11:34.551 pt4' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.551 17:52:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.810 17:52:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:34.810 [2024-10-25 17:52:53.066227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 16e121d0-6c46-403c-a5e3-3e6d4231f12c '!=' 16e121d0-6c46-403c-a5e3-3e6d4231f12c ']' 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.810 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.811 [2024-10-25 17:52:53.113888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:34.811 
17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.811 "name": "raid_bdev1", 00:11:34.811 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:34.811 "strip_size_kb": 0, 00:11:34.811 "state": 
"online", 00:11:34.811 "raid_level": "raid1", 00:11:34.811 "superblock": true, 00:11:34.811 "num_base_bdevs": 4, 00:11:34.811 "num_base_bdevs_discovered": 3, 00:11:34.811 "num_base_bdevs_operational": 3, 00:11:34.811 "base_bdevs_list": [ 00:11:34.811 { 00:11:34.811 "name": null, 00:11:34.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.811 "is_configured": false, 00:11:34.811 "data_offset": 0, 00:11:34.811 "data_size": 63488 00:11:34.811 }, 00:11:34.811 { 00:11:34.811 "name": "pt2", 00:11:34.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.811 "is_configured": true, 00:11:34.811 "data_offset": 2048, 00:11:34.811 "data_size": 63488 00:11:34.811 }, 00:11:34.811 { 00:11:34.811 "name": "pt3", 00:11:34.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.811 "is_configured": true, 00:11:34.811 "data_offset": 2048, 00:11:34.811 "data_size": 63488 00:11:34.811 }, 00:11:34.811 { 00:11:34.811 "name": "pt4", 00:11:34.811 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.811 "is_configured": true, 00:11:34.811 "data_offset": 2048, 00:11:34.811 "data_size": 63488 00:11:34.811 } 00:11:34.811 ] 00:11:34.811 }' 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.811 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 [2024-10-25 17:52:53.584996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.379 [2024-10-25 17:52:53.585058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.379 [2024-10-25 17:52:53.585181] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.379 [2024-10-25 17:52:53.585288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.379 [2024-10-25 17:52:53.585306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.379 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.380 [2024-10-25 17:52:53.664848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.380 [2024-10-25 
17:52:53.664942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.380 [2024-10-25 17:52:53.664968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:35.380 [2024-10-25 17:52:53.664980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.380 [2024-10-25 17:52:53.667727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.380 [2024-10-25 17:52:53.667774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.380 [2024-10-25 17:52:53.667902] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.380 [2024-10-25 17:52:53.667966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.380 pt2 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.380 17:52:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.380 "name": "raid_bdev1", 00:11:35.380 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:35.380 "strip_size_kb": 0, 00:11:35.380 "state": "configuring", 00:11:35.380 "raid_level": "raid1", 00:11:35.380 "superblock": true, 00:11:35.380 "num_base_bdevs": 4, 00:11:35.380 "num_base_bdevs_discovered": 1, 00:11:35.380 "num_base_bdevs_operational": 3, 00:11:35.380 "base_bdevs_list": [ 00:11:35.380 { 00:11:35.380 "name": null, 00:11:35.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.380 "is_configured": false, 00:11:35.380 "data_offset": 2048, 00:11:35.380 "data_size": 63488 00:11:35.380 }, 00:11:35.380 { 00:11:35.380 "name": "pt2", 00:11:35.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.380 "is_configured": true, 00:11:35.380 "data_offset": 2048, 00:11:35.380 "data_size": 63488 00:11:35.380 }, 00:11:35.380 { 00:11:35.380 "name": null, 00:11:35.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.380 "is_configured": false, 00:11:35.380 "data_offset": 2048, 00:11:35.380 "data_size": 63488 00:11:35.380 }, 00:11:35.380 { 00:11:35.380 "name": null, 00:11:35.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.380 "is_configured": false, 00:11:35.380 "data_offset": 2048, 00:11:35.380 "data_size": 63488 00:11:35.380 
} 00:11:35.380 ] 00:11:35.380 }' 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.380 17:52:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.949 [2024-10-25 17:52:54.124217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.949 [2024-10-25 17:52:54.124325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.949 [2024-10-25 17:52:54.124354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:35.949 [2024-10-25 17:52:54.124367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.949 [2024-10-25 17:52:54.124963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.949 [2024-10-25 17:52:54.124996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.949 [2024-10-25 17:52:54.125111] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:35.949 [2024-10-25 17:52:54.125143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.949 pt3 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.949 "name": "raid_bdev1", 00:11:35.949 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:35.949 "strip_size_kb": 0, 00:11:35.949 "state": "configuring", 00:11:35.949 "raid_level": "raid1", 00:11:35.949 "superblock": true, 00:11:35.949 "num_base_bdevs": 4, 00:11:35.949 "num_base_bdevs_discovered": 2, 
00:11:35.949 "num_base_bdevs_operational": 3, 00:11:35.949 "base_bdevs_list": [ 00:11:35.949 { 00:11:35.949 "name": null, 00:11:35.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.949 "is_configured": false, 00:11:35.949 "data_offset": 2048, 00:11:35.949 "data_size": 63488 00:11:35.949 }, 00:11:35.949 { 00:11:35.949 "name": "pt2", 00:11:35.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.949 "is_configured": true, 00:11:35.949 "data_offset": 2048, 00:11:35.949 "data_size": 63488 00:11:35.949 }, 00:11:35.949 { 00:11:35.949 "name": "pt3", 00:11:35.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.949 "is_configured": true, 00:11:35.949 "data_offset": 2048, 00:11:35.949 "data_size": 63488 00:11:35.949 }, 00:11:35.949 { 00:11:35.949 "name": null, 00:11:35.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.949 "is_configured": false, 00:11:35.949 "data_offset": 2048, 00:11:35.949 "data_size": 63488 00:11:35.949 } 00:11:35.949 ] 00:11:35.949 }' 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.949 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.208 [2024-10-25 17:52:54.603397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:36.208 [2024-10-25 
17:52:54.603506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.208 [2024-10-25 17:52:54.603536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:36.208 [2024-10-25 17:52:54.603546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.208 [2024-10-25 17:52:54.604106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.208 [2024-10-25 17:52:54.604144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:36.208 [2024-10-25 17:52:54.604255] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:36.208 [2024-10-25 17:52:54.604294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:36.208 [2024-10-25 17:52:54.604462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:36.208 [2024-10-25 17:52:54.604477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.208 [2024-10-25 17:52:54.604762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:36.208 [2024-10-25 17:52:54.604951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.208 [2024-10-25 17:52:54.604971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.208 [2024-10-25 17:52:54.605123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.208 pt4 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.208 17:52:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.208 "name": "raid_bdev1", 00:11:36.208 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:36.208 "strip_size_kb": 0, 00:11:36.208 "state": "online", 00:11:36.208 "raid_level": "raid1", 00:11:36.208 "superblock": true, 00:11:36.208 "num_base_bdevs": 4, 00:11:36.208 "num_base_bdevs_discovered": 3, 00:11:36.208 "num_base_bdevs_operational": 3, 00:11:36.208 "base_bdevs_list": [ 00:11:36.208 { 00:11:36.208 "name": null, 00:11:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.208 
"is_configured": false, 00:11:36.208 "data_offset": 2048, 00:11:36.208 "data_size": 63488 00:11:36.208 }, 00:11:36.208 { 00:11:36.208 "name": "pt2", 00:11:36.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.208 "is_configured": true, 00:11:36.208 "data_offset": 2048, 00:11:36.208 "data_size": 63488 00:11:36.208 }, 00:11:36.208 { 00:11:36.208 "name": "pt3", 00:11:36.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.208 "is_configured": true, 00:11:36.208 "data_offset": 2048, 00:11:36.208 "data_size": 63488 00:11:36.208 }, 00:11:36.208 { 00:11:36.208 "name": "pt4", 00:11:36.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.208 "is_configured": true, 00:11:36.208 "data_offset": 2048, 00:11:36.208 "data_size": 63488 00:11:36.208 } 00:11:36.208 ] 00:11:36.208 }' 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.208 17:52:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.776 [2024-10-25 17:52:55.062566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.776 [2024-10-25 17:52:55.062623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.776 [2024-10-25 17:52:55.062745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.776 [2024-10-25 17:52:55.062870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.776 [2024-10-25 17:52:55.062894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.776 [2024-10-25 17:52:55.130475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.776 [2024-10-25 17:52:55.130602] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:36.776 [2024-10-25 17:52:55.130629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:36.776 [2024-10-25 17:52:55.130645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.776 [2024-10-25 17:52:55.133816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.776 [2024-10-25 17:52:55.133902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.776 [2024-10-25 17:52:55.134040] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.776 [2024-10-25 17:52:55.134120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.776 [2024-10-25 17:52:55.134311] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:36.776 [2024-10-25 17:52:55.134332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.776 [2024-10-25 17:52:55.134353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:36.776 [2024-10-25 17:52:55.134452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.776 [2024-10-25 17:52:55.134674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.776 pt1 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.776 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.777 "name": "raid_bdev1", 00:11:36.777 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:36.777 "strip_size_kb": 0, 00:11:36.777 "state": "configuring", 00:11:36.777 "raid_level": "raid1", 00:11:36.777 "superblock": true, 00:11:36.777 "num_base_bdevs": 4, 00:11:36.777 "num_base_bdevs_discovered": 2, 00:11:36.777 "num_base_bdevs_operational": 3, 00:11:36.777 "base_bdevs_list": [ 00:11:36.777 { 00:11:36.777 "name": null, 00:11:36.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.777 "is_configured": false, 00:11:36.777 
"data_offset": 2048, 00:11:36.777 "data_size": 63488 00:11:36.777 }, 00:11:36.777 { 00:11:36.777 "name": "pt2", 00:11:36.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.777 "is_configured": true, 00:11:36.777 "data_offset": 2048, 00:11:36.777 "data_size": 63488 00:11:36.777 }, 00:11:36.777 { 00:11:36.777 "name": "pt3", 00:11:36.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.777 "is_configured": true, 00:11:36.777 "data_offset": 2048, 00:11:36.777 "data_size": 63488 00:11:36.777 }, 00:11:36.777 { 00:11:36.777 "name": null, 00:11:36.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.777 "is_configured": false, 00:11:36.777 "data_offset": 2048, 00:11:36.777 "data_size": 63488 00:11:36.777 } 00:11:36.777 ] 00:11:36.777 }' 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.777 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:37.343 [2024-10-25 17:52:55.617845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:37.343 [2024-10-25 17:52:55.617950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.343 [2024-10-25 17:52:55.617985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:37.343 [2024-10-25 17:52:55.617998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.343 [2024-10-25 17:52:55.618618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.343 [2024-10-25 17:52:55.618658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:37.343 [2024-10-25 17:52:55.618786] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:37.343 [2024-10-25 17:52:55.618840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:37.343 [2024-10-25 17:52:55.619029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:37.343 [2024-10-25 17:52:55.619047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.343 [2024-10-25 17:52:55.619387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:37.343 [2024-10-25 17:52:55.619573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:37.343 [2024-10-25 17:52:55.619588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:37.343 [2024-10-25 17:52:55.619776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.343 pt4 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.343 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.344 "name": "raid_bdev1", 00:11:37.344 "uuid": "16e121d0-6c46-403c-a5e3-3e6d4231f12c", 00:11:37.344 "strip_size_kb": 0, 00:11:37.344 "state": "online", 00:11:37.344 "raid_level": "raid1", 00:11:37.344 "superblock": true, 00:11:37.344 "num_base_bdevs": 4, 00:11:37.344 "num_base_bdevs_discovered": 3, 00:11:37.344 "num_base_bdevs_operational": 3, 00:11:37.344 
"base_bdevs_list": [ 00:11:37.344 { 00:11:37.344 "name": null, 00:11:37.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.344 "is_configured": false, 00:11:37.344 "data_offset": 2048, 00:11:37.344 "data_size": 63488 00:11:37.344 }, 00:11:37.344 { 00:11:37.344 "name": "pt2", 00:11:37.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.344 "is_configured": true, 00:11:37.344 "data_offset": 2048, 00:11:37.344 "data_size": 63488 00:11:37.344 }, 00:11:37.344 { 00:11:37.344 "name": "pt3", 00:11:37.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.344 "is_configured": true, 00:11:37.344 "data_offset": 2048, 00:11:37.344 "data_size": 63488 00:11:37.344 }, 00:11:37.344 { 00:11:37.344 "name": "pt4", 00:11:37.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.344 "is_configured": true, 00:11:37.344 "data_offset": 2048, 00:11:37.344 "data_size": 63488 00:11:37.344 } 00:11:37.344 ] 00:11:37.344 }' 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.344 17:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.910 [2024-10-25 17:52:56.125397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 16e121d0-6c46-403c-a5e3-3e6d4231f12c '!=' 16e121d0-6c46-403c-a5e3-3e6d4231f12c ']' 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74254 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74254 ']' 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74254 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:37.910 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74254 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74254' 00:11:37.911 killing process with pid 74254 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74254 00:11:37.911 17:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74254 00:11:37.911 [2024-10-25 17:52:56.209068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:37.911 [2024-10-25 17:52:56.209225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.911 [2024-10-25 17:52:56.209342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.911 [2024-10-25 17:52:56.209358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:38.478 [2024-10-25 17:52:56.731481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.853 17:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.853 00:11:39.853 real 0m9.115s 00:11:39.853 user 0m13.996s 00:11:39.853 sys 0m1.681s 00:11:39.853 17:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.853 17:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 ************************************ 00:11:39.853 END TEST raid_superblock_test 00:11:39.853 ************************************ 00:11:39.853 17:52:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:39.853 17:52:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:39.853 17:52:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.853 17:52:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 ************************************ 00:11:39.853 START TEST raid_read_error_test 00:11:39.853 ************************************ 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.853 
17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Bh09ZyOjS7 00:11:39.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74751 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74751 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74751 ']' 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 17:52:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.112 [2024-10-25 17:52:58.327607] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:11:40.112 [2024-10-25 17:52:58.327911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74751 ] 00:11:40.112 [2024-10-25 17:52:58.499710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.370 [2024-10-25 17:52:58.657415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.629 [2024-10-25 17:52:58.936370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.629 [2024-10-25 17:52:58.936606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.889 BaseBdev1_malloc 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.889 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 true 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 [2024-10-25 17:52:59.334847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.148 [2024-10-25 17:52:59.335049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.148 [2024-10-25 17:52:59.335090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.148 [2024-10-25 17:52:59.335108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.148 [2024-10-25 17:52:59.338283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.148 [2024-10-25 17:52:59.338430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.148 BaseBdev1 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 BaseBdev2_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 true 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 [2024-10-25 17:52:59.408363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.148 [2024-10-25 17:52:59.408474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.148 [2024-10-25 17:52:59.408503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.148 [2024-10-25 17:52:59.408519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.148 [2024-10-25 17:52:59.411597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.148 [2024-10-25 17:52:59.411671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.148 BaseBdev2 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 BaseBdev3_malloc 00:11:41.148 17:52:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 true 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 [2024-10-25 17:52:59.505862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.148 [2024-10-25 17:52:59.505983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.148 [2024-10-25 17:52:59.506016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.148 [2024-10-25 17:52:59.506031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.148 [2024-10-25 17:52:59.509247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.148 [2024-10-25 17:52:59.509323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.148 BaseBdev3 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 BaseBdev4_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.148 true 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.148 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.407 [2024-10-25 17:52:59.588702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:41.407 [2024-10-25 17:52:59.588964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.407 [2024-10-25 17:52:59.589004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.407 [2024-10-25 17:52:59.589019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.407 [2024-10-25 17:52:59.592205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.407 [2024-10-25 17:52:59.592402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:41.407 BaseBdev4 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.407 [2024-10-25 17:52:59.600824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.407 [2024-10-25 17:52:59.603615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.407 [2024-10-25 17:52:59.603866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.407 [2024-10-25 17:52:59.603974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.407 [2024-10-25 17:52:59.604345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:41.407 [2024-10-25 17:52:59.604366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.407 [2024-10-25 17:52:59.604765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:41.407 [2024-10-25 17:52:59.605027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:41.407 [2024-10-25 17:52:59.605039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:41.407 [2024-10-25 17:52:59.605379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:41.407 17:52:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.407 "name": "raid_bdev1", 00:11:41.407 "uuid": "a42c49fa-a0cc-4340-80ae-31dc6d0bee1a", 00:11:41.407 "strip_size_kb": 0, 00:11:41.407 "state": "online", 00:11:41.407 "raid_level": "raid1", 00:11:41.407 "superblock": true, 00:11:41.407 "num_base_bdevs": 4, 00:11:41.407 "num_base_bdevs_discovered": 4, 00:11:41.407 "num_base_bdevs_operational": 4, 00:11:41.407 "base_bdevs_list": [ 00:11:41.407 { 
00:11:41.407 "name": "BaseBdev1", 00:11:41.407 "uuid": "29a68999-bb2b-5267-853a-eb2509143008", 00:11:41.407 "is_configured": true, 00:11:41.407 "data_offset": 2048, 00:11:41.407 "data_size": 63488 00:11:41.407 }, 00:11:41.407 { 00:11:41.407 "name": "BaseBdev2", 00:11:41.407 "uuid": "7a56d345-990e-5e49-9445-7b28d1a07506", 00:11:41.407 "is_configured": true, 00:11:41.407 "data_offset": 2048, 00:11:41.407 "data_size": 63488 00:11:41.407 }, 00:11:41.407 { 00:11:41.407 "name": "BaseBdev3", 00:11:41.407 "uuid": "5b7260e7-0399-587a-b84f-99baca0bfc63", 00:11:41.407 "is_configured": true, 00:11:41.407 "data_offset": 2048, 00:11:41.407 "data_size": 63488 00:11:41.407 }, 00:11:41.407 { 00:11:41.407 "name": "BaseBdev4", 00:11:41.407 "uuid": "c0e67d84-bbad-5a79-9147-1eaa20b58c7c", 00:11:41.407 "is_configured": true, 00:11:41.407 "data_offset": 2048, 00:11:41.407 "data_size": 63488 00:11:41.407 } 00:11:41.407 ] 00:11:41.407 }' 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.407 17:52:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.665 17:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.665 17:53:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.922 [2024-10-25 17:53:00.158699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.863 17:53:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.863 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.864 17:53:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.864 "name": "raid_bdev1", 00:11:42.864 "uuid": "a42c49fa-a0cc-4340-80ae-31dc6d0bee1a", 00:11:42.864 "strip_size_kb": 0, 00:11:42.864 "state": "online", 00:11:42.864 "raid_level": "raid1", 00:11:42.864 "superblock": true, 00:11:42.864 "num_base_bdevs": 4, 00:11:42.864 "num_base_bdevs_discovered": 4, 00:11:42.864 "num_base_bdevs_operational": 4, 00:11:42.864 "base_bdevs_list": [ 00:11:42.864 { 00:11:42.864 "name": "BaseBdev1", 00:11:42.864 "uuid": "29a68999-bb2b-5267-853a-eb2509143008", 00:11:42.864 "is_configured": true, 00:11:42.864 "data_offset": 2048, 00:11:42.864 "data_size": 63488 00:11:42.864 }, 00:11:42.864 { 00:11:42.864 "name": "BaseBdev2", 00:11:42.864 "uuid": "7a56d345-990e-5e49-9445-7b28d1a07506", 00:11:42.864 "is_configured": true, 00:11:42.864 "data_offset": 2048, 00:11:42.864 "data_size": 63488 00:11:42.864 }, 00:11:42.864 { 00:11:42.864 "name": "BaseBdev3", 00:11:42.864 "uuid": "5b7260e7-0399-587a-b84f-99baca0bfc63", 00:11:42.864 "is_configured": true, 00:11:42.864 "data_offset": 2048, 00:11:42.864 "data_size": 63488 00:11:42.864 }, 00:11:42.864 { 00:11:42.864 "name": "BaseBdev4", 00:11:42.864 "uuid": "c0e67d84-bbad-5a79-9147-1eaa20b58c7c", 00:11:42.864 "is_configured": true, 00:11:42.864 "data_offset": 2048, 00:11:42.864 "data_size": 63488 00:11:42.864 } 00:11:42.864 ] 00:11:42.864 }' 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.864 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.123 [2024-10-25 17:53:01.528472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.123 [2024-10-25 17:53:01.528645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.123 [2024-10-25 17:53:01.532233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.123 [2024-10-25 17:53:01.532412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.123 [2024-10-25 17:53:01.532681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.123 [2024-10-25 17:53:01.532754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:43.123 { 00:11:43.123 "results": [ 00:11:43.123 { 00:11:43.123 "job": "raid_bdev1", 00:11:43.123 "core_mask": "0x1", 00:11:43.123 "workload": "randrw", 00:11:43.123 "percentage": 50, 00:11:43.123 "status": "finished", 00:11:43.123 "queue_depth": 1, 00:11:43.123 "io_size": 131072, 00:11:43.123 "runtime": 1.369873, 00:11:43.123 "iops": 6369.2035685059855, 00:11:43.123 "mibps": 796.1504460632482, 00:11:43.123 "io_failed": 0, 00:11:43.123 "io_timeout": 0, 00:11:43.123 "avg_latency_us": 153.6355807359768, 00:11:43.123 "min_latency_us": 30.63056768558952, 00:11:43.123 "max_latency_us": 2031.9021834061136 00:11:43.123 } 00:11:43.123 ], 00:11:43.123 "core_count": 1 00:11:43.123 } 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74751 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74751 ']' 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74751 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.123 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74751 00:11:43.382 killing process with pid 74751 00:11:43.382 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.382 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.382 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74751' 00:11:43.382 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74751 00:11:43.382 17:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74751 00:11:43.382 [2024-10-25 17:53:01.571986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.669 [2024-10-25 17:53:02.011367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Bh09ZyOjS7 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:45.573 00:11:45.573 real 0m5.361s 00:11:45.573 user 0m6.117s 00:11:45.573 sys 0m0.789s 
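The bdevperf results block above reports both `iops` and `mibps` for the same run; with the 128 KiB `io_size` used here, the two figures are directly related. A minimal sketch checking that relation, with the numbers copied from the log (the check itself is an illustration, not part of the test suite):

```python
import json

# Results fragment as printed by bdevperf above (values copied from the log).
results = json.loads("""
{
  "job": "raid_bdev1",
  "io_size": 131072,
  "runtime": 1.369873,
  "iops": 6369.2035685059855,
  "mibps": 796.1504460632482
}
""")

# mibps is throughput in MiB/s: IOPS times the per-I/O size, scaled to MiB.
computed_mibps = results["iops"] * results["io_size"] / (1024 * 1024)
print(round(computed_mibps, 6))  # agrees with the reported mibps field
```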
00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.573 17:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 ************************************ 00:11:45.573 END TEST raid_read_error_test 00:11:45.573 ************************************ 00:11:45.573 17:53:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:45.573 17:53:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:45.573 17:53:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.573 17:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 ************************************ 00:11:45.573 START TEST raid_write_error_test 00:11:45.573 ************************************ 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oo5muVgsXO 00:11:45.573 17:53:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74904 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74904 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74904 ']' 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.573 17:53:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 [2024-10-25 17:53:03.784079] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
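`waitforlisten` above blocks until the freshly started bdevperf process creates its RPC socket at `/var/tmp/spdk.sock`. The real helper is a bash retry loop in `autotest_common.sh`; the sketch below only mirrors its intent in Python, and the retry count and delay are illustrative assumptions:

```python
import os
import socket
import time

def wait_for_rpc_socket(path: str, retries: int = 100, delay: float = 0.01) -> bool:
    """Poll until a UNIX-domain socket at `path` exists and accepts connections."""
    for _ in range(retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # someone is listening: RPC server is up
            except OSError:
                pass  # socket file exists but nobody is listening yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```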
00:11:45.573 [2024-10-25 17:53:03.784338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74904 ] 00:11:45.573 [2024-10-25 17:53:03.976069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.833 [2024-10-25 17:53:04.141585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.093 [2024-10-25 17:53:04.422637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.093 [2024-10-25 17:53:04.422717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.351 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.352 BaseBdev1_malloc 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.352 true 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.352 [2024-10-25 17:53:04.771660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.352 [2024-10-25 17:53:04.771924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.352 [2024-10-25 17:53:04.771969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.352 [2024-10-25 17:53:04.771986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.352 [2024-10-25 17:53:04.775261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.352 [2024-10-25 17:53:04.775332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.352 BaseBdev1 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.352 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.610 BaseBdev2_malloc 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.610 17:53:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.610 true 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.610 [2024-10-25 17:53:04.842640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.610 [2024-10-25 17:53:04.842867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.610 [2024-10-25 17:53:04.842935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.610 [2024-10-25 17:53:04.842978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.610 [2024-10-25 17:53:04.846757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.610 [2024-10-25 17:53:04.847013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.610 BaseBdev2 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.610 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
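Each base bdev in the trace above is built as a three-layer stack via `bdev_raid.sh@815`-`817`: a malloc bdev, an error bdev wrapping it, and a passthru bdev named `BaseBdevN` on top (so errors can later be injected into `EE_BaseBdevN_malloc`). A sketch that just generates that RPC sequence, using exactly the commands visible in the log:

```python
def base_bdev_rpc_sequence(name: str) -> list:
    """RPC calls building one malloc -> error -> passthru stack,
    mirroring bdev_raid.sh@815-817 in the trace above."""
    malloc = f"{name}_malloc"
    return [
        f"bdev_malloc_create 32 512 -b {malloc}",
        f"bdev_error_create {malloc}",
        f"bdev_passthru_create -b EE_{malloc} -p {name}",
    ]

# The test loops this over all four base bdevs before creating the raid.
cmds = [c for n in ("BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4")
        for c in base_bdev_rpc_sequence(n)]
```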
00:11:46.611 BaseBdev3_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 true 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 [2024-10-25 17:53:04.931438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.611 [2024-10-25 17:53:04.931711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.611 [2024-10-25 17:53:04.931756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.611 [2024-10-25 17:53:04.931773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.611 [2024-10-25 17:53:04.935073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.611 [2024-10-25 17:53:04.935152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:46.611 BaseBdev3 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 BaseBdev4_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 true 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 [2024-10-25 17:53:05.010335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:46.611 [2024-10-25 17:53:05.010426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.611 [2024-10-25 17:53:05.010456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:46.611 [2024-10-25 17:53:05.010471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.611 [2024-10-25 17:53:05.013496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.611 [2024-10-25 17:53:05.013651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:46.611 BaseBdev4 
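Both traces exercise the same branch at `bdev_raid.sh@832`-`835`: for raid1, a read failure is absorbed by redundancy and all base bdevs stay discovered (the read test above verifies 4), while a write failure removes the failing base bdev (the write test below verifies 3). A small sketch of that expectation logic as it appears in the script:

```python
def expected_num_base_bdevs(raid_level: str, error_io_type: str,
                            num_base_bdevs: int) -> int:
    """Mirror of the bdev_raid.sh@832-835 branch seen in the traces:
    a raid1 write error drops the failing base bdev; a read error does not."""
    if raid_level == "raid1" and error_io_type == "write":
        return num_base_bdevs - 1
    return num_base_bdevs
```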
00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 [2024-10-25 17:53:05.018519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.611 [2024-10-25 17:53:05.021285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.611 [2024-10-25 17:53:05.021444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.611 [2024-10-25 17:53:05.021579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.611 [2024-10-25 17:53:05.021969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:46.611 [2024-10-25 17:53:05.022035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.611 [2024-10-25 17:53:05.022459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:46.611 [2024-10-25 17:53:05.022739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:46.611 [2024-10-25 17:53:05.022786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:46.611 [2024-10-25 17:53:05.023212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.611 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.869 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.869 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.869 "name": "raid_bdev1", 00:11:46.869 "uuid": "11550bbb-6c6c-4146-a289-991286839889", 00:11:46.869 "strip_size_kb": 0, 00:11:46.869 "state": "online", 00:11:46.869 "raid_level": "raid1", 00:11:46.869 "superblock": true, 00:11:46.869 "num_base_bdevs": 4, 00:11:46.869 "num_base_bdevs_discovered": 4, 00:11:46.869 
"num_base_bdevs_operational": 4, 00:11:46.869 "base_bdevs_list": [ 00:11:46.869 { 00:11:46.869 "name": "BaseBdev1", 00:11:46.869 "uuid": "a304d291-9851-5285-8a25-2701e90a1e84", 00:11:46.869 "is_configured": true, 00:11:46.869 "data_offset": 2048, 00:11:46.869 "data_size": 63488 00:11:46.869 }, 00:11:46.869 { 00:11:46.869 "name": "BaseBdev2", 00:11:46.869 "uuid": "1785c763-5968-5a02-98e7-6576955a10ee", 00:11:46.869 "is_configured": true, 00:11:46.869 "data_offset": 2048, 00:11:46.869 "data_size": 63488 00:11:46.869 }, 00:11:46.869 { 00:11:46.869 "name": "BaseBdev3", 00:11:46.869 "uuid": "2daef611-4f44-589d-a3b4-4aeb92d0d854", 00:11:46.869 "is_configured": true, 00:11:46.869 "data_offset": 2048, 00:11:46.869 "data_size": 63488 00:11:46.869 }, 00:11:46.869 { 00:11:46.869 "name": "BaseBdev4", 00:11:46.869 "uuid": "f9b85e06-a32b-5fdd-92dc-302277768d79", 00:11:46.869 "is_configured": true, 00:11:46.869 "data_offset": 2048, 00:11:46.869 "data_size": 63488 00:11:46.869 } 00:11:46.869 ] 00:11:46.869 }' 00:11:46.869 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.869 17:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.128 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.128 17:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.387 [2024-10-25 17:53:05.619984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.339 [2024-10-25 17:53:06.470455] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:48.339 [2024-10-25 17:53:06.470563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.339 [2024-10-25 17:53:06.470876] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.339 "name": "raid_bdev1", 00:11:48.339 "uuid": "11550bbb-6c6c-4146-a289-991286839889", 00:11:48.339 "strip_size_kb": 0, 00:11:48.339 "state": "online", 00:11:48.339 "raid_level": "raid1", 00:11:48.339 "superblock": true, 00:11:48.339 "num_base_bdevs": 4, 00:11:48.339 "num_base_bdevs_discovered": 3, 00:11:48.339 "num_base_bdevs_operational": 3, 00:11:48.339 "base_bdevs_list": [ 00:11:48.339 { 00:11:48.339 "name": null, 00:11:48.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.339 "is_configured": false, 00:11:48.339 "data_offset": 0, 00:11:48.339 "data_size": 63488 00:11:48.339 }, 00:11:48.339 { 00:11:48.339 "name": "BaseBdev2", 00:11:48.339 "uuid": "1785c763-5968-5a02-98e7-6576955a10ee", 00:11:48.339 "is_configured": true, 00:11:48.339 "data_offset": 2048, 00:11:48.339 "data_size": 63488 00:11:48.339 }, 00:11:48.339 { 00:11:48.339 "name": "BaseBdev3", 00:11:48.339 "uuid": "2daef611-4f44-589d-a3b4-4aeb92d0d854", 00:11:48.339 "is_configured": true, 00:11:48.339 "data_offset": 2048, 00:11:48.339 "data_size": 63488 00:11:48.339 }, 00:11:48.339 { 00:11:48.339 "name": "BaseBdev4", 00:11:48.339 "uuid": "f9b85e06-a32b-5fdd-92dc-302277768d79", 00:11:48.339 "is_configured": true, 00:11:48.339 "data_offset": 2048, 00:11:48.339 "data_size": 63488 00:11:48.339 } 00:11:48.339 ] 
00:11:48.339 }' 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.339 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.611 [2024-10-25 17:53:06.958511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.611 [2024-10-25 17:53:06.958574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.611 { 00:11:48.611 "results": [ 00:11:48.611 { 00:11:48.611 "job": "raid_bdev1", 00:11:48.611 "core_mask": "0x1", 00:11:48.611 "workload": "randrw", 00:11:48.611 "percentage": 50, 00:11:48.611 "status": "finished", 00:11:48.611 "queue_depth": 1, 00:11:48.611 "io_size": 131072, 00:11:48.611 "runtime": 1.33836, 00:11:48.611 "iops": 7082.548791057712, 00:11:48.611 "mibps": 885.318598882214, 00:11:48.611 "io_failed": 0, 00:11:48.611 "io_timeout": 0, 00:11:48.611 "avg_latency_us": 137.8652574687047, 00:11:48.611 "min_latency_us": 29.512663755458515, 00:11:48.611 "max_latency_us": 1674.172925764192 00:11:48.611 } 00:11:48.611 ], 00:11:48.611 "core_count": 1 00:11:48.611 } 00:11:48.611 [2024-10-25 17:53:06.962162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.611 [2024-10-25 17:53:06.962252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.611 [2024-10-25 17:53:06.962466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.611 [2024-10-25 17:53:06.962481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
offline 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74904 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74904 ']' 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74904 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74904 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74904' 00:11:48.611 killing process with pid 74904 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74904 00:11:48.611 17:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74904 00:11:48.611 [2024-10-25 17:53:06.998649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.181 [2024-10-25 17:53:07.432697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oo5muVgsXO 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.560 00:11:50.560 real 0m5.332s 00:11:50.560 user 0m6.177s 00:11:50.560 sys 0m0.746s 00:11:50.560 ************************************ 00:11:50.560 END TEST raid_write_error_test 00:11:50.560 ************************************ 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.560 17:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.818 17:53:09 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:50.818 17:53:09 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:50.818 17:53:09 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:50.818 17:53:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:50.818 17:53:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.818 17:53:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.818 ************************************ 00:11:50.818 START TEST raid_rebuild_test 00:11:50.818 ************************************ 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:50.818 
17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75053 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75053 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75053 ']' 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.818 17:53:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.818 [2024-10-25 17:53:09.136039] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:11:50.818 [2024-10-25 17:53:09.136304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:50.818 Zero copy mechanism will not be used. 
00:11:50.818 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75053 ] 00:11:51.076 [2024-10-25 17:53:09.304681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.076 [2024-10-25 17:53:09.481028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.334 [2024-10-25 17:53:09.763169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.334 [2024-10-25 17:53:09.763281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.903 BaseBdev1_malloc 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.903 [2024-10-25 17:53:10.131395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.903 [2024-10-25 17:53:10.131552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.903 [2024-10-25 
17:53:10.131590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.903 [2024-10-25 17:53:10.131607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.903 [2024-10-25 17:53:10.134682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.903 [2024-10-25 17:53:10.134876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.903 BaseBdev1 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.903 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.904 BaseBdev2_malloc 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.904 [2024-10-25 17:53:10.206105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.904 [2024-10-25 17:53:10.206389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.904 [2024-10-25 17:53:10.206430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.904 [2024-10-25 17:53:10.206446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:51.904 [2024-10-25 17:53:10.209660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.904 [2024-10-25 17:53:10.209860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.904 BaseBdev2 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.904 spare_malloc 00:11:51.904 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.905 spare_delay 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.905 [2024-10-25 17:53:10.297624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:51.905 [2024-10-25 17:53:10.297761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.905 [2024-10-25 17:53:10.297800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:51.905 [2024-10-25 17:53:10.297818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.905 [2024-10-25 17:53:10.301097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.905 [2024-10-25 17:53:10.301180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:51.905 spare 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.905 [2024-10-25 17:53:10.309742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.905 [2024-10-25 17:53:10.312518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.905 [2024-10-25 17:53:10.312819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:51.905 [2024-10-25 17:53:10.312857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:51.905 [2024-10-25 17:53:10.313266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:51.905 [2024-10-25 17:53:10.313494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:51.905 [2024-10-25 17:53:10.313508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:51.905 [2024-10-25 17:53:10.313821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.905 
17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.905 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.166 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.166 "name": "raid_bdev1", 00:11:52.166 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:11:52.166 "strip_size_kb": 0, 00:11:52.166 "state": "online", 00:11:52.166 "raid_level": "raid1", 00:11:52.166 "superblock": false, 00:11:52.166 "num_base_bdevs": 2, 00:11:52.166 "num_base_bdevs_discovered": 
2, 00:11:52.166 "num_base_bdevs_operational": 2, 00:11:52.166 "base_bdevs_list": [ 00:11:52.166 { 00:11:52.166 "name": "BaseBdev1", 00:11:52.166 "uuid": "46d0eab4-3fae-53b6-bdba-902d207f6504", 00:11:52.166 "is_configured": true, 00:11:52.166 "data_offset": 0, 00:11:52.166 "data_size": 65536 00:11:52.166 }, 00:11:52.166 { 00:11:52.166 "name": "BaseBdev2", 00:11:52.166 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:11:52.166 "is_configured": true, 00:11:52.166 "data_offset": 0, 00:11:52.166 "data_size": 65536 00:11:52.166 } 00:11:52.166 ] 00:11:52.166 }' 00:11:52.166 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.166 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 [2024-10-25 17:53:10.797436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 17:53:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.683 17:53:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.959 [2024-10-25 17:53:11.128573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:52.959 /dev/nbd0 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.959 1+0 records in 00:11:52.959 1+0 records out 00:11:52.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352628 s, 11.6 MB/s 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:52.959 17:53:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:59.567 65536+0 records in 00:11:59.567 65536+0 records out 00:11:59.567 33554432 bytes (34 MB, 32 MiB) copied, 5.9997 s, 5.6 MB/s 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.567 [2024-10-25 17:53:17.499923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.567 
17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.567 [2024-10-25 17:53:17.512081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.567 "name": "raid_bdev1", 00:11:59.567 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:11:59.567 "strip_size_kb": 0, 00:11:59.567 "state": "online", 00:11:59.567 "raid_level": "raid1", 00:11:59.567 "superblock": false, 00:11:59.567 "num_base_bdevs": 2, 00:11:59.567 "num_base_bdevs_discovered": 1, 00:11:59.567 "num_base_bdevs_operational": 1, 00:11:59.567 "base_bdevs_list": [ 00:11:59.567 { 00:11:59.567 "name": null, 00:11:59.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.567 "is_configured": false, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 }, 00:11:59.567 { 00:11:59.567 "name": "BaseBdev2", 00:11:59.567 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:11:59.567 "is_configured": true, 00:11:59.567 "data_offset": 0, 00:11:59.567 "data_size": 65536 00:11:59.567 } 00:11:59.567 ] 00:11:59.567 }' 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.567 17:53:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.567 [2024-10-25 17:53:17.999605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.826 [2024-10-25 17:53:18.020657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:59.827 17:53:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.827 17:53:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:59.827 [2024-10-25 17:53:18.023052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.764 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.765 "name": "raid_bdev1", 00:12:00.765 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:00.765 "strip_size_kb": 0, 00:12:00.765 "state": "online", 00:12:00.765 "raid_level": "raid1", 00:12:00.765 "superblock": false, 00:12:00.765 "num_base_bdevs": 2, 00:12:00.765 "num_base_bdevs_discovered": 2, 00:12:00.765 "num_base_bdevs_operational": 2, 00:12:00.765 "process": { 00:12:00.765 "type": "rebuild", 00:12:00.765 "target": "spare", 00:12:00.765 "progress": { 00:12:00.765 "blocks": 20480, 00:12:00.765 "percent": 31 00:12:00.765 } 00:12:00.765 }, 00:12:00.765 "base_bdevs_list": [ 00:12:00.765 { 
00:12:00.765 "name": "spare", 00:12:00.765 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:00.765 "is_configured": true, 00:12:00.765 "data_offset": 0, 00:12:00.765 "data_size": 65536 00:12:00.765 }, 00:12:00.765 { 00:12:00.765 "name": "BaseBdev2", 00:12:00.765 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:00.765 "is_configured": true, 00:12:00.765 "data_offset": 0, 00:12:00.765 "data_size": 65536 00:12:00.765 } 00:12:00.765 ] 00:12:00.765 }' 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.765 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.765 [2024-10-25 17:53:19.171185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.026 [2024-10-25 17:53:19.234309] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.026 [2024-10-25 17:53:19.234650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.026 [2024-10-25 17:53:19.234685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.026 [2024-10-25 17:53:19.234726] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.026 17:53:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.026 "name": "raid_bdev1", 00:12:01.026 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:01.026 "strip_size_kb": 0, 00:12:01.026 "state": "online", 00:12:01.026 "raid_level": "raid1", 00:12:01.026 "superblock": false, 00:12:01.026 "num_base_bdevs": 2, 00:12:01.026 "num_base_bdevs_discovered": 1, 
00:12:01.026 "num_base_bdevs_operational": 1, 00:12:01.026 "base_bdevs_list": [ 00:12:01.026 { 00:12:01.026 "name": null, 00:12:01.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.026 "is_configured": false, 00:12:01.026 "data_offset": 0, 00:12:01.026 "data_size": 65536 00:12:01.026 }, 00:12:01.026 { 00:12:01.026 "name": "BaseBdev2", 00:12:01.026 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:01.026 "is_configured": true, 00:12:01.026 "data_offset": 0, 00:12:01.026 "data_size": 65536 00:12:01.026 } 00:12:01.026 ] 00:12:01.026 }' 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.026 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.286 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.546 "name": "raid_bdev1", 00:12:01.546 "uuid": 
"a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:01.546 "strip_size_kb": 0, 00:12:01.546 "state": "online", 00:12:01.546 "raid_level": "raid1", 00:12:01.546 "superblock": false, 00:12:01.546 "num_base_bdevs": 2, 00:12:01.546 "num_base_bdevs_discovered": 1, 00:12:01.546 "num_base_bdevs_operational": 1, 00:12:01.546 "base_bdevs_list": [ 00:12:01.546 { 00:12:01.546 "name": null, 00:12:01.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.546 "is_configured": false, 00:12:01.546 "data_offset": 0, 00:12:01.546 "data_size": 65536 00:12:01.546 }, 00:12:01.546 { 00:12:01.546 "name": "BaseBdev2", 00:12:01.546 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:01.546 "is_configured": true, 00:12:01.546 "data_offset": 0, 00:12:01.546 "data_size": 65536 00:12:01.546 } 00:12:01.546 ] 00:12:01.546 }' 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.546 [2024-10-25 17:53:19.870532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.546 [2024-10-25 17:53:19.892374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.546 17:53:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:01.546 [2024-10-25 17:53:19.895018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.485 17:53:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.746 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.746 "name": "raid_bdev1", 00:12:02.746 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:02.746 "strip_size_kb": 0, 00:12:02.746 "state": "online", 00:12:02.746 "raid_level": "raid1", 00:12:02.746 "superblock": false, 00:12:02.746 "num_base_bdevs": 2, 00:12:02.746 "num_base_bdevs_discovered": 2, 00:12:02.746 "num_base_bdevs_operational": 2, 00:12:02.746 "process": { 00:12:02.746 "type": "rebuild", 00:12:02.746 "target": "spare", 00:12:02.746 "progress": { 00:12:02.746 "blocks": 18432, 00:12:02.746 "percent": 28 00:12:02.746 } 00:12:02.746 }, 00:12:02.746 "base_bdevs_list": [ 00:12:02.746 { 00:12:02.746 "name": "spare", 00:12:02.746 "uuid": 
"17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:02.746 "is_configured": true, 00:12:02.746 "data_offset": 0, 00:12:02.746 "data_size": 65536 00:12:02.746 }, 00:12:02.746 { 00:12:02.746 "name": "BaseBdev2", 00:12:02.746 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:02.746 "is_configured": true, 00:12:02.746 "data_offset": 0, 00:12:02.746 "data_size": 65536 00:12:02.746 } 00:12:02.746 ] 00:12:02.746 }' 00:12:02.746 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.746 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.746 17:53:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.746 "name": "raid_bdev1", 00:12:02.746 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:02.746 "strip_size_kb": 0, 00:12:02.746 "state": "online", 00:12:02.746 "raid_level": "raid1", 00:12:02.746 "superblock": false, 00:12:02.746 "num_base_bdevs": 2, 00:12:02.746 "num_base_bdevs_discovered": 2, 00:12:02.746 "num_base_bdevs_operational": 2, 00:12:02.746 "process": { 00:12:02.746 "type": "rebuild", 00:12:02.746 "target": "spare", 00:12:02.746 "progress": { 00:12:02.746 "blocks": 22528, 00:12:02.746 "percent": 34 00:12:02.746 } 00:12:02.746 }, 00:12:02.746 "base_bdevs_list": [ 00:12:02.746 { 00:12:02.746 "name": "spare", 00:12:02.746 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:02.746 "is_configured": true, 00:12:02.746 "data_offset": 0, 00:12:02.746 "data_size": 65536 00:12:02.746 }, 00:12:02.746 { 00:12:02.746 "name": "BaseBdev2", 00:12:02.746 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:02.746 "is_configured": true, 00:12:02.746 "data_offset": 0, 00:12:02.746 "data_size": 65536 00:12:02.746 } 00:12:02.746 ] 00:12:02.746 }' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.746 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.747 17:53:21 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.747 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.747 17:53:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.181 "name": "raid_bdev1", 00:12:04.181 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:04.181 "strip_size_kb": 0, 00:12:04.181 "state": "online", 00:12:04.181 "raid_level": "raid1", 00:12:04.181 "superblock": false, 00:12:04.181 "num_base_bdevs": 2, 00:12:04.181 "num_base_bdevs_discovered": 2, 00:12:04.181 "num_base_bdevs_operational": 2, 00:12:04.181 "process": { 00:12:04.181 "type": "rebuild", 00:12:04.181 "target": "spare", 
00:12:04.181 "progress": { 00:12:04.181 "blocks": 45056, 00:12:04.181 "percent": 68 00:12:04.181 } 00:12:04.181 }, 00:12:04.181 "base_bdevs_list": [ 00:12:04.181 { 00:12:04.181 "name": "spare", 00:12:04.181 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:04.181 "is_configured": true, 00:12:04.181 "data_offset": 0, 00:12:04.181 "data_size": 65536 00:12:04.181 }, 00:12:04.181 { 00:12:04.181 "name": "BaseBdev2", 00:12:04.181 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:04.181 "is_configured": true, 00:12:04.181 "data_offset": 0, 00:12:04.181 "data_size": 65536 00:12:04.181 } 00:12:04.181 ] 00:12:04.181 }' 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.181 17:53:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.751 [2024-10-25 17:53:23.124979] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:04.751 [2024-10-25 17:53:23.125237] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:04.751 [2024-10-25 17:53:23.125314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.011 "name": "raid_bdev1", 00:12:05.011 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:05.011 "strip_size_kb": 0, 00:12:05.011 "state": "online", 00:12:05.011 "raid_level": "raid1", 00:12:05.011 "superblock": false, 00:12:05.011 "num_base_bdevs": 2, 00:12:05.011 "num_base_bdevs_discovered": 2, 00:12:05.011 "num_base_bdevs_operational": 2, 00:12:05.011 "base_bdevs_list": [ 00:12:05.011 { 00:12:05.011 "name": "spare", 00:12:05.011 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:05.011 "is_configured": true, 00:12:05.011 "data_offset": 0, 00:12:05.011 "data_size": 65536 00:12:05.011 }, 00:12:05.011 { 00:12:05.011 "name": "BaseBdev2", 00:12:05.011 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:05.011 "is_configured": true, 00:12:05.011 "data_offset": 0, 00:12:05.011 "data_size": 65536 00:12:05.011 } 00:12:05.011 ] 00:12:05.011 }' 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:05.011 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.272 "name": "raid_bdev1", 00:12:05.272 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:05.272 "strip_size_kb": 0, 00:12:05.272 "state": "online", 00:12:05.272 "raid_level": "raid1", 00:12:05.272 "superblock": false, 00:12:05.272 "num_base_bdevs": 2, 00:12:05.272 "num_base_bdevs_discovered": 2, 00:12:05.272 "num_base_bdevs_operational": 2, 00:12:05.272 "base_bdevs_list": [ 00:12:05.272 { 00:12:05.272 "name": "spare", 00:12:05.272 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:05.272 "is_configured": true, 00:12:05.272 "data_offset": 0, 00:12:05.272 "data_size": 65536 
00:12:05.272 }, 00:12:05.272 { 00:12:05.272 "name": "BaseBdev2", 00:12:05.272 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:05.272 "is_configured": true, 00:12:05.272 "data_offset": 0, 00:12:05.272 "data_size": 65536 00:12:05.272 } 00:12:05.272 ] 00:12:05.272 }' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.272 "name": "raid_bdev1", 00:12:05.272 "uuid": "a67a8582-d943-4512-ba71-597ddff8fba1", 00:12:05.272 "strip_size_kb": 0, 00:12:05.272 "state": "online", 00:12:05.272 "raid_level": "raid1", 00:12:05.272 "superblock": false, 00:12:05.272 "num_base_bdevs": 2, 00:12:05.272 "num_base_bdevs_discovered": 2, 00:12:05.272 "num_base_bdevs_operational": 2, 00:12:05.272 "base_bdevs_list": [ 00:12:05.272 { 00:12:05.272 "name": "spare", 00:12:05.272 "uuid": "17d2136b-f3e4-5fe1-b30c-caa62f209a89", 00:12:05.272 "is_configured": true, 00:12:05.272 "data_offset": 0, 00:12:05.272 "data_size": 65536 00:12:05.272 }, 00:12:05.272 { 00:12:05.272 "name": "BaseBdev2", 00:12:05.272 "uuid": "ad399b9a-8bd9-5441-aef7-2fb9f3c2eaba", 00:12:05.272 "is_configured": true, 00:12:05.272 "data_offset": 0, 00:12:05.272 "data_size": 65536 00:12:05.272 } 00:12:05.272 ] 00:12:05.272 }' 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.272 17:53:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.843 [2024-10-25 17:53:24.078438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.843 [2024-10-25 17:53:24.078507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:05.843 [2024-10-25 17:53:24.078638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.843 [2024-10-25 17:53:24.078730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.843 [2024-10-25 17:53:24.078743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.843 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.844 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:06.103 /dev/nbd0 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.103 1+0 records in 00:12:06.103 1+0 records out 00:12:06.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320043 s, 12.8 MB/s 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.103 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:06.362 /dev/nbd1 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:12:06.362 1+0 records in 00:12:06.362 1+0 records out 00:12:06.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348143 s, 11.8 MB/s 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.362 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.621 17:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.881 17:53:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.881 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75053 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 
75053 ']' 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75053 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75053 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.453 killing process with pid 75053 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75053' 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75053 00:12:07.453 Received shutdown signal, test time was about 60.000000 seconds 00:12:07.453 00:12:07.453 Latency(us) 00:12:07.453 [2024-10-25T17:53:25.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.453 [2024-10-25T17:53:25.889Z] =================================================================================================================== 00:12:07.453 [2024-10-25T17:53:25.889Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:07.453 17:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75053 00:12:07.453 [2024-10-25 17:53:25.641536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.744 [2024-10-25 17:53:26.007626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:09.124 00:12:09.124 real 0m18.311s 00:12:09.124 user 0m20.222s 00:12:09.124 sys 0m3.850s 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.124 ************************************ 00:12:09.124 END TEST raid_rebuild_test 00:12:09.124 ************************************ 00:12:09.124 17:53:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:09.124 17:53:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:09.124 17:53:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.124 17:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.124 ************************************ 00:12:09.124 START TEST raid_rebuild_test_sb 00:12:09.124 ************************************ 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.124 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75499 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75499 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75499 ']' 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.125 17:53:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.125 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.125 Zero copy mechanism will not be used. 00:12:09.125 [2024-10-25 17:53:27.489921] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:09.125 [2024-10-25 17:53:27.490072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75499 ] 00:12:09.384 [2024-10-25 17:53:27.655174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.384 [2024-10-25 17:53:27.788396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.643 [2024-10-25 17:53:28.026172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.643 [2024-10-25 17:53:28.026260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.212 BaseBdev1_malloc 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.212 [2024-10-25 17:53:28.546255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.212 [2024-10-25 17:53:28.546389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.212 [2024-10-25 17:53:28.546431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.212 [2024-10-25 17:53:28.546451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.212 [2024-10-25 17:53:28.549727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.212 [2024-10-25 17:53:28.549813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.212 BaseBdev1 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
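The base-bdev plumbing being traced here is uniform: each BaseBdevN is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev, and the raid1 bdev is later assembled on top with an on-disk superblock (`-s`). A condensed sketch that only prints the rpc.py invocations visible in the log (the loop and function name are my condensation; nothing is executed against a live SPDK target here):

```shell
# Condensed sketch of the base-bdev setup in this test. The rpc.py commands
# mirror the ones in the log; they are printed rather than run, since no SPDK
# target is listening in this sketch.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
print_base_bdev_setup() {
    local bdev
    for bdev in BaseBdev1 BaseBdev2; do
        echo "$rpc bdev_malloc_create 32 512 -b ${bdev}_malloc"
        echo "$rpc bdev_passthru_create -b ${bdev}_malloc -p $bdev"
    done
    # superblock-enabled raid1 across both passthru bdevs
    echo "$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1"
}
print_base_bdev_setup
```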
00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.212 BaseBdev2_malloc 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.212 [2024-10-25 17:53:28.601042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:10.212 [2024-10-25 17:53:28.601140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.212 [2024-10-25 17:53:28.601167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.212 [2024-10-25 17:53:28.601184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.212 [2024-10-25 17:53:28.603808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.212 [2024-10-25 17:53:28.603890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.212 BaseBdev2 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.212 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.472 spare_malloc 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.472 spare_delay 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.472 [2024-10-25 17:53:28.690575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.472 [2024-10-25 17:53:28.690687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.472 [2024-10-25 17:53:28.690719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:10.472 [2024-10-25 17:53:28.690734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.472 [2024-10-25 17:53:28.693828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.472 [2024-10-25 17:53:28.693908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.472 spare 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.472 
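After `bdev_raid_create`, the trace runs `verify_raid_bdev_state`, which feeds `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the state, raid level, and base-bdev counts against expectations. A minimal stand-in, assuming no live target: a snippet shaped like the log's `raid_bdev_info` dump is checked field by field with grep (the function name and the grep-based check are mine; the JSON keys mirror the dump in the log):

```shell
# Minimal stand-in for verify_raid_bdev_state: assert key fields of a
# raid_bdev_info JSON dump. The real helper extracts these with jq from
# `rpc.py bdev_raid_get_bdevs all`.
check_raid_state() {
    local json=$1
    echo "$json" | grep -q '"state": "online"'     || return 1
    echo "$json" | grep -q '"raid_level": "raid1"' || return 1
    echo "$json" | grep -q '"superblock": true'    || return 1
}
sample='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs_discovered": 2
}'
check_raid_state "$sample" && echo "raid_bdev1 online"
```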
[2024-10-25 17:53:28.698756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.472 [2024-10-25 17:53:28.701424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.472 [2024-10-25 17:53:28.701689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:10.472 [2024-10-25 17:53:28.701718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.472 [2024-10-25 17:53:28.702118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.472 [2024-10-25 17:53:28.702355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:10.472 [2024-10-25 17:53:28.702373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:10.472 [2024-10-25 17:53:28.702680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:10.472 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.473 "name": "raid_bdev1", 00:12:10.473 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:10.473 "strip_size_kb": 0, 00:12:10.473 "state": "online", 00:12:10.473 "raid_level": "raid1", 00:12:10.473 "superblock": true, 00:12:10.473 "num_base_bdevs": 2, 00:12:10.473 "num_base_bdevs_discovered": 2, 00:12:10.473 "num_base_bdevs_operational": 2, 00:12:10.473 "base_bdevs_list": [ 00:12:10.473 { 00:12:10.473 "name": "BaseBdev1", 00:12:10.473 "uuid": "c40e1f12-c6ed-5009-a0ac-81090b62320e", 00:12:10.473 "is_configured": true, 00:12:10.473 "data_offset": 2048, 00:12:10.473 "data_size": 63488 00:12:10.473 }, 00:12:10.473 { 00:12:10.473 "name": "BaseBdev2", 00:12:10.473 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:10.473 "is_configured": true, 00:12:10.473 "data_offset": 2048, 00:12:10.473 "data_size": 63488 00:12:10.473 } 00:12:10.473 ] 00:12:10.473 }' 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.473 17:53:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.733 17:53:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.733 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.733 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.733 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:10.733 [2024-10-25 17:53:29.166525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:10.992 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:11.252 [2024-10-25 17:53:29.505620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:11.253 /dev/nbd0 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
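The dd transfer that follows fills the whole raid device exposed over /dev/nbd0 with random data: `raid_bdev_size` was read back as 63488 blocks, the raid was created with a 512-byte blocklen, and `write_unit_size` is 1 block for raid1, so the 32505856-byte figure in the dd summary is just the geometry multiplied out. A quick check of that arithmetic (variable names are mine):

```shell
# Size arithmetic for the full-device write over /dev/nbd0:
# 63488 blocks (bdev_get_bdevs .num_blocks) at a 512-byte blocklen.
blocks=63488
blocklen=512
total=$(( blocks * blocklen ))
echo "$total bytes"                    # 32505856 bytes
echo "$(( total / 1024 / 1024 )) MiB"  # 31 MiB, matching the dd summary
```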
00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.253 1+0 records in 00:12:11.253 1+0 records out 00:12:11.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355715 s, 11.5 MB/s 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:11.253 17:53:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:16.584 63488+0 records in 00:12:16.584 63488+0 records out 00:12:16.584 32505856 bytes (33 MB, 31 MiB) copied, 5.35485 s, 6.1 MB/s 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:16.584 17:53:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.584 17:53:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:16.843 [2024-10-25 17:53:35.169721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.843 [2024-10-25 17:53:35.209821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.843 "name": "raid_bdev1", 00:12:16.843 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:16.843 "strip_size_kb": 0, 00:12:16.843 "state": "online", 00:12:16.843 "raid_level": "raid1", 00:12:16.843 "superblock": true, 00:12:16.843 "num_base_bdevs": 2, 00:12:16.843 "num_base_bdevs_discovered": 1, 00:12:16.843 
"num_base_bdevs_operational": 1, 00:12:16.843 "base_bdevs_list": [ 00:12:16.843 { 00:12:16.843 "name": null, 00:12:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.843 "is_configured": false, 00:12:16.843 "data_offset": 0, 00:12:16.843 "data_size": 63488 00:12:16.843 }, 00:12:16.843 { 00:12:16.843 "name": "BaseBdev2", 00:12:16.843 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:16.843 "is_configured": true, 00:12:16.843 "data_offset": 2048, 00:12:16.843 "data_size": 63488 00:12:16.843 } 00:12:16.843 ] 00:12:16.843 }' 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.843 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.413 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.413 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.413 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.413 [2024-10-25 17:53:35.629156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.413 [2024-10-25 17:53:35.652591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:17.413 17:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.413 17:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:17.413 [2024-10-25 17:53:35.655331] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.351 "name": "raid_bdev1", 00:12:18.351 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:18.351 "strip_size_kb": 0, 00:12:18.351 "state": "online", 00:12:18.351 "raid_level": "raid1", 00:12:18.351 "superblock": true, 00:12:18.351 "num_base_bdevs": 2, 00:12:18.351 "num_base_bdevs_discovered": 2, 00:12:18.351 "num_base_bdevs_operational": 2, 00:12:18.351 "process": { 00:12:18.351 "type": "rebuild", 00:12:18.351 "target": "spare", 00:12:18.351 "progress": { 00:12:18.351 "blocks": 20480, 00:12:18.351 "percent": 32 00:12:18.351 } 00:12:18.351 }, 00:12:18.351 "base_bdevs_list": [ 00:12:18.351 { 00:12:18.351 "name": "spare", 00:12:18.351 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:18.351 "is_configured": true, 00:12:18.351 "data_offset": 2048, 00:12:18.351 "data_size": 63488 00:12:18.351 }, 00:12:18.351 { 00:12:18.351 "name": "BaseBdev2", 00:12:18.351 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:18.351 "is_configured": true, 00:12:18.351 "data_offset": 2048, 00:12:18.351 "data_size": 63488 00:12:18.351 } 00:12:18.351 ] 00:12:18.351 }' 00:12:18.351 17:53:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.351 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.352 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.611 [2024-10-25 17:53:36.814884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.611 [2024-10-25 17:53:36.866811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:18.611 [2024-10-25 17:53:36.866946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.611 [2024-10-25 17:53:36.866968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.611 [2024-10-25 17:53:36.866981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.611 17:53:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.611 "name": "raid_bdev1", 00:12:18.611 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:18.611 "strip_size_kb": 0, 00:12:18.611 "state": "online", 00:12:18.611 "raid_level": "raid1", 00:12:18.611 "superblock": true, 00:12:18.611 "num_base_bdevs": 2, 00:12:18.611 "num_base_bdevs_discovered": 1, 00:12:18.611 "num_base_bdevs_operational": 1, 00:12:18.611 "base_bdevs_list": [ 00:12:18.611 { 00:12:18.611 "name": null, 00:12:18.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.611 "is_configured": false, 00:12:18.611 "data_offset": 0, 00:12:18.611 "data_size": 63488 00:12:18.611 }, 00:12:18.611 { 00:12:18.611 "name": "BaseBdev2", 00:12:18.611 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:18.611 
"is_configured": true, 00:12:18.611 "data_offset": 2048, 00:12:18.611 "data_size": 63488 00:12:18.611 } 00:12:18.611 ] 00:12:18.611 }' 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.611 17:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.179 "name": "raid_bdev1", 00:12:19.179 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:19.179 "strip_size_kb": 0, 00:12:19.179 "state": "online", 00:12:19.179 "raid_level": "raid1", 00:12:19.179 "superblock": true, 00:12:19.179 "num_base_bdevs": 2, 00:12:19.179 "num_base_bdevs_discovered": 1, 00:12:19.179 "num_base_bdevs_operational": 1, 00:12:19.179 "base_bdevs_list": [ 00:12:19.179 { 00:12:19.179 "name": null, 00:12:19.179 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:19.179 "is_configured": false, 00:12:19.179 "data_offset": 0, 00:12:19.179 "data_size": 63488 00:12:19.179 }, 00:12:19.179 { 00:12:19.179 "name": "BaseBdev2", 00:12:19.179 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:19.179 "is_configured": true, 00:12:19.179 "data_offset": 2048, 00:12:19.179 "data_size": 63488 00:12:19.179 } 00:12:19.179 ] 00:12:19.179 }' 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.179 [2024-10-25 17:53:37.541516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.179 [2024-10-25 17:53:37.562124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.179 17:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:19.179 [2024-10-25 17:53:37.564872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.561 "name": "raid_bdev1", 00:12:20.561 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:20.561 "strip_size_kb": 0, 00:12:20.561 "state": "online", 00:12:20.561 "raid_level": "raid1", 00:12:20.561 "superblock": true, 00:12:20.561 "num_base_bdevs": 2, 00:12:20.561 "num_base_bdevs_discovered": 2, 00:12:20.561 "num_base_bdevs_operational": 2, 00:12:20.561 "process": { 00:12:20.561 "type": "rebuild", 00:12:20.561 "target": "spare", 00:12:20.561 "progress": { 00:12:20.561 "blocks": 20480, 00:12:20.561 "percent": 32 00:12:20.561 } 00:12:20.561 }, 00:12:20.561 "base_bdevs_list": [ 00:12:20.561 { 00:12:20.561 "name": "spare", 00:12:20.561 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:20.561 "is_configured": true, 00:12:20.561 "data_offset": 2048, 00:12:20.561 "data_size": 63488 00:12:20.561 }, 00:12:20.561 { 00:12:20.561 "name": "BaseBdev2", 00:12:20.561 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:20.561 "is_configured": true, 00:12:20.561 "data_offset": 2048, 
00:12:20.561 "data_size": 63488 00:12:20.561 } 00:12:20.561 ] 00:12:20.561 }' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:20.561 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.561 "name": "raid_bdev1", 00:12:20.561 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:20.561 "strip_size_kb": 0, 00:12:20.561 "state": "online", 00:12:20.561 "raid_level": "raid1", 00:12:20.561 "superblock": true, 00:12:20.561 "num_base_bdevs": 2, 00:12:20.561 "num_base_bdevs_discovered": 2, 00:12:20.561 "num_base_bdevs_operational": 2, 00:12:20.561 "process": { 00:12:20.561 "type": "rebuild", 00:12:20.561 "target": "spare", 00:12:20.561 "progress": { 00:12:20.561 "blocks": 22528, 00:12:20.561 "percent": 35 00:12:20.561 } 00:12:20.561 }, 00:12:20.561 "base_bdevs_list": [ 00:12:20.561 { 00:12:20.561 "name": "spare", 00:12:20.561 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:20.561 "is_configured": true, 00:12:20.561 "data_offset": 2048, 00:12:20.561 "data_size": 63488 00:12:20.561 }, 00:12:20.561 { 00:12:20.561 "name": "BaseBdev2", 00:12:20.561 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:20.561 "is_configured": true, 00:12:20.561 "data_offset": 2048, 00:12:20.561 "data_size": 63488 00:12:20.561 } 00:12:20.561 ] 00:12:20.561 }' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.561 17:53:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.518 "name": "raid_bdev1", 00:12:21.518 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:21.518 "strip_size_kb": 0, 00:12:21.518 "state": "online", 00:12:21.518 "raid_level": "raid1", 00:12:21.518 "superblock": true, 00:12:21.518 "num_base_bdevs": 2, 00:12:21.518 "num_base_bdevs_discovered": 2, 00:12:21.518 "num_base_bdevs_operational": 2, 00:12:21.518 "process": { 00:12:21.518 "type": "rebuild", 00:12:21.518 "target": "spare", 
00:12:21.518 "progress": { 00:12:21.518 "blocks": 45056, 00:12:21.518 "percent": 70 00:12:21.518 } 00:12:21.518 }, 00:12:21.518 "base_bdevs_list": [ 00:12:21.518 { 00:12:21.518 "name": "spare", 00:12:21.518 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:21.518 "is_configured": true, 00:12:21.518 "data_offset": 2048, 00:12:21.518 "data_size": 63488 00:12:21.518 }, 00:12:21.518 { 00:12:21.518 "name": "BaseBdev2", 00:12:21.518 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:21.518 "is_configured": true, 00:12:21.518 "data_offset": 2048, 00:12:21.518 "data_size": 63488 00:12:21.518 } 00:12:21.518 ] 00:12:21.518 }' 00:12:21.518 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.776 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.776 17:53:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.776 17:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.776 17:53:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.342 [2024-10-25 17:53:40.692453] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:22.342 [2024-10-25 17:53:40.692597] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:22.342 [2024-10-25 17:53:40.692793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.600 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.859 "name": "raid_bdev1", 00:12:22.859 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:22.859 "strip_size_kb": 0, 00:12:22.859 "state": "online", 00:12:22.859 "raid_level": "raid1", 00:12:22.859 "superblock": true, 00:12:22.859 "num_base_bdevs": 2, 00:12:22.859 "num_base_bdevs_discovered": 2, 00:12:22.859 "num_base_bdevs_operational": 2, 00:12:22.859 "base_bdevs_list": [ 00:12:22.859 { 00:12:22.859 "name": "spare", 00:12:22.859 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:22.859 "is_configured": true, 00:12:22.859 "data_offset": 2048, 00:12:22.859 "data_size": 63488 00:12:22.859 }, 00:12:22.859 { 00:12:22.859 "name": "BaseBdev2", 00:12:22.859 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:22.859 "is_configured": true, 00:12:22.859 "data_offset": 2048, 00:12:22.859 "data_size": 63488 00:12:22.859 } 00:12:22.859 ] 00:12:22.859 }' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:22.859 
17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.859 "name": "raid_bdev1", 00:12:22.859 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:22.859 "strip_size_kb": 0, 00:12:22.859 "state": "online", 00:12:22.859 "raid_level": "raid1", 00:12:22.859 "superblock": true, 00:12:22.859 "num_base_bdevs": 2, 00:12:22.859 "num_base_bdevs_discovered": 2, 00:12:22.859 "num_base_bdevs_operational": 2, 00:12:22.859 "base_bdevs_list": [ 00:12:22.859 { 00:12:22.859 "name": "spare", 00:12:22.859 "uuid": 
"1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:22.859 "is_configured": true, 00:12:22.859 "data_offset": 2048, 00:12:22.859 "data_size": 63488 00:12:22.859 }, 00:12:22.859 { 00:12:22.859 "name": "BaseBdev2", 00:12:22.859 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:22.859 "is_configured": true, 00:12:22.859 "data_offset": 2048, 00:12:22.859 "data_size": 63488 00:12:22.859 } 00:12:22.859 ] 00:12:22.859 }' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.859 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.118 17:53:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.118 "name": "raid_bdev1", 00:12:23.118 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:23.118 "strip_size_kb": 0, 00:12:23.118 "state": "online", 00:12:23.118 "raid_level": "raid1", 00:12:23.118 "superblock": true, 00:12:23.118 "num_base_bdevs": 2, 00:12:23.118 "num_base_bdevs_discovered": 2, 00:12:23.118 "num_base_bdevs_operational": 2, 00:12:23.118 "base_bdevs_list": [ 00:12:23.118 { 00:12:23.118 "name": "spare", 00:12:23.118 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:23.118 "is_configured": true, 00:12:23.118 "data_offset": 2048, 00:12:23.118 "data_size": 63488 00:12:23.118 }, 00:12:23.118 { 00:12:23.118 "name": "BaseBdev2", 00:12:23.118 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:23.118 "is_configured": true, 00:12:23.118 "data_offset": 2048, 00:12:23.118 "data_size": 63488 00:12:23.118 } 00:12:23.118 ] 00:12:23.118 }' 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.118 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.377 [2024-10-25 17:53:41.793497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.377 [2024-10-25 17:53:41.793562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.377 [2024-10-25 17:53:41.793681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.377 [2024-10-25 17:53:41.793781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.377 [2024-10-25 17:53:41.793804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:23.377 17:53:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:23.637 17:53:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:23.898 /dev/nbd0 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.898 1+0 records in 00:12:23.898 1+0 records out 00:12:23.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456194 s, 9.0 MB/s 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:23.898 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:24.157 /dev/nbd1 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:24.157 17:53:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.157 1+0 records in 00:12:24.157 1+0 records out 00:12:24.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299078 s, 13.7 MB/s 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:24.157 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.416 
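The `waitfornbd` helper traced above polls `/proc/partitions` for up to 20 iterations and then confirms the device actually answers a single direct-I/O read with `dd`. A minimal, generalized sketch of that retry pattern (the helper name `retry_until` and the 20-attempt budget are illustrative, not part of the SPDK scripts):

```shell
#!/bin/bash
# Retry a command up to a fixed number of attempts, sleeping briefly
# between tries; succeed as soon as the command does. This mirrors the
# waitfornbd loop in the trace: poll until the nbd device appears, then
# verify it is readable with one direct-I/O block read.
retry_until() {
    local attempts=$1; shift
    local i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}

# Hypothetical usage against an nbd device, as in the log above:
#   retry_until 20 grep -q -w nbd0 /proc/partitions
#   dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
```

The two-phase check matters: a device node can exist in `/proc/partitions` before the nbd connection is fully serviceable, so the trace follows the grep with a real read.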
17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.416 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.675 17:53:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.675 17:53:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.933 [2024-10-25 17:53:43.135664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:24.933 [2024-10-25 17:53:43.135751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.933 [2024-10-25 17:53:43.135779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:24.933 [2024-10-25 17:53:43.135794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.933 [2024-10-25 17:53:43.138446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.933 [2024-10-25 17:53:43.138492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:24.933 [2024-10-25 17:53:43.138611] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:24.933 [2024-10-25 
17:53:43.138670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.933 [2024-10-25 17:53:43.138868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.933 spare 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.933 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.933 [2024-10-25 17:53:43.238805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:24.933 [2024-10-25 17:53:43.238904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.934 [2024-10-25 17:53:43.239288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:24.934 [2024-10-25 17:53:43.239509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:24.934 [2024-10-25 17:53:43.239526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:24.934 [2024-10-25 17:53:43.239755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.934 "name": "raid_bdev1", 00:12:24.934 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:24.934 "strip_size_kb": 0, 00:12:24.934 "state": "online", 00:12:24.934 "raid_level": "raid1", 00:12:24.934 "superblock": true, 00:12:24.934 "num_base_bdevs": 2, 00:12:24.934 "num_base_bdevs_discovered": 2, 00:12:24.934 "num_base_bdevs_operational": 2, 00:12:24.934 "base_bdevs_list": [ 00:12:24.934 { 00:12:24.934 "name": "spare", 00:12:24.934 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:24.934 "is_configured": true, 00:12:24.934 "data_offset": 2048, 00:12:24.934 "data_size": 63488 00:12:24.934 }, 00:12:24.934 { 00:12:24.934 "name": "BaseBdev2", 00:12:24.934 "uuid": 
"914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:24.934 "is_configured": true, 00:12:24.934 "data_offset": 2048, 00:12:24.934 "data_size": 63488 00:12:24.934 } 00:12:24.934 ] 00:12:24.934 }' 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.934 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.501 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.502 "name": "raid_bdev1", 00:12:25.502 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:25.502 "strip_size_kb": 0, 00:12:25.502 "state": "online", 00:12:25.502 "raid_level": "raid1", 00:12:25.502 "superblock": true, 00:12:25.502 "num_base_bdevs": 2, 00:12:25.502 "num_base_bdevs_discovered": 2, 00:12:25.502 "num_base_bdevs_operational": 2, 00:12:25.502 "base_bdevs_list": [ 00:12:25.502 { 
00:12:25.502 "name": "spare", 00:12:25.502 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:25.502 "is_configured": true, 00:12:25.502 "data_offset": 2048, 00:12:25.502 "data_size": 63488 00:12:25.502 }, 00:12:25.502 { 00:12:25.502 "name": "BaseBdev2", 00:12:25.502 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:25.502 "is_configured": true, 00:12:25.502 "data_offset": 2048, 00:12:25.502 "data_size": 63488 00:12:25.502 } 00:12:25.502 ] 00:12:25.502 }' 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.502 [2024-10-25 17:53:43.902560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
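The `verify_raid_bdev_state` calls traced above fetch `bdev_raid_get_bdevs all`, select one entry with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compare fields such as `state` and `num_base_bdevs_discovered` against expected values. A simplified sketch of that check, reading the RPC JSON from stdin rather than invoking `rpc.py` (the function name `check_raid_state` is illustrative):

```shell
#!/bin/bash
# Simplified sketch of the verify_raid_bdev_state pattern: select one
# raid bdev from `bdev_raid_get_bdevs all` output and compare its state
# and discovered base-bdev count. JSON is taken from stdin here instead
# of the rpc.py call used by the real helper.
check_raid_state() {
    local name=$1 expected_state=$2 expected_discovered=$3
    local info
    # Pick the entry whose .name matches, as the traced jq filter does.
    info=$(jq -r --arg n "$name" '.[] | select(.name == $n)')
    [ "$(jq -r .state <<< "$info")" = "$expected_state" ] || return 1
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq "$expected_discovered" ] || return 1
    return 0
}
```

After removing the `spare` base bdev, the trace expects `state` still `online` with `num_base_bdevs_discovered` dropping from 2 to 1, which is exactly the comparison this helper performs.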
00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.502 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.761 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.761 "name": "raid_bdev1", 00:12:25.761 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:25.761 "strip_size_kb": 0, 00:12:25.761 
"state": "online", 00:12:25.761 "raid_level": "raid1", 00:12:25.761 "superblock": true, 00:12:25.761 "num_base_bdevs": 2, 00:12:25.761 "num_base_bdevs_discovered": 1, 00:12:25.761 "num_base_bdevs_operational": 1, 00:12:25.761 "base_bdevs_list": [ 00:12:25.761 { 00:12:25.761 "name": null, 00:12:25.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.761 "is_configured": false, 00:12:25.761 "data_offset": 0, 00:12:25.761 "data_size": 63488 00:12:25.761 }, 00:12:25.761 { 00:12:25.761 "name": "BaseBdev2", 00:12:25.761 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:25.761 "is_configured": true, 00:12:25.761 "data_offset": 2048, 00:12:25.761 "data_size": 63488 00:12:25.761 } 00:12:25.761 ] 00:12:25.761 }' 00:12:25.761 17:53:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.761 17:53:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 17:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.020 17:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 17:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 [2024-10-25 17:53:44.369852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.020 [2024-10-25 17:53:44.370095] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:26.020 [2024-10-25 17:53:44.370122] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
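Once `spare` is re-added and the rebuild starts, the traced `verify_raid_bdev_process` checks use the jq filters `.process.type // "none"` and `.process.target // "none"`, defaulting to `"none"` when no background process is running. A sketch of those extractors, again fed from stdin instead of `rpc.py` (the `process_percent` helper is an illustrative extension using the same `.process.progress` object shown in the JSON dumps):

```shell
#!/bin/bash
# Extract background-process info from a raid bdev's JSON the way the
# traced jq filters do. The // "none" alternative yields "none" when the
# .process key is absent, i.e. no rebuild is in flight.
process_type()    { jq -r '.process.type // "none"'; }
process_target()  { jq -r '.process.target // "none"'; }
process_percent() { jq -r '.process.progress.percent // 0'; }
```

During the rebuild the dumps above report `"type": "rebuild"`, `"target": "spare"`, and `"percent": 32` at 20480 blocks; after the rebuild finishes (or the process bdev is deleted), both `type` and `target` fall back to `"none"`.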
00:12:26.020 [2024-10-25 17:53:44.370166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.020 [2024-10-25 17:53:44.388493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:26.020 17:53:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.020 17:53:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:26.020 [2024-10-25 17:53:44.390685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.400 "name": "raid_bdev1", 00:12:27.400 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:27.400 "strip_size_kb": 0, 00:12:27.400 "state": "online", 00:12:27.400 "raid_level": "raid1", 
00:12:27.400 "superblock": true, 00:12:27.400 "num_base_bdevs": 2, 00:12:27.400 "num_base_bdevs_discovered": 2, 00:12:27.400 "num_base_bdevs_operational": 2, 00:12:27.400 "process": { 00:12:27.400 "type": "rebuild", 00:12:27.400 "target": "spare", 00:12:27.400 "progress": { 00:12:27.400 "blocks": 20480, 00:12:27.400 "percent": 32 00:12:27.400 } 00:12:27.400 }, 00:12:27.400 "base_bdevs_list": [ 00:12:27.400 { 00:12:27.400 "name": "spare", 00:12:27.400 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:27.400 "is_configured": true, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 }, 00:12:27.400 { 00:12:27.400 "name": "BaseBdev2", 00:12:27.400 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:27.400 "is_configured": true, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 } 00:12:27.400 ] 00:12:27.400 }' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.400 [2024-10-25 17:53:45.554194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.400 [2024-10-25 17:53:45.597486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.400 [2024-10-25 17:53:45.597557] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:27.400 [2024-10-25 17:53:45.597573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.400 [2024-10-25 17:53:45.597584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.400 "name": "raid_bdev1", 00:12:27.400 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:27.400 "strip_size_kb": 0, 00:12:27.400 "state": "online", 00:12:27.400 "raid_level": "raid1", 00:12:27.400 "superblock": true, 00:12:27.400 "num_base_bdevs": 2, 00:12:27.400 "num_base_bdevs_discovered": 1, 00:12:27.400 "num_base_bdevs_operational": 1, 00:12:27.400 "base_bdevs_list": [ 00:12:27.400 { 00:12:27.400 "name": null, 00:12:27.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.400 "is_configured": false, 00:12:27.400 "data_offset": 0, 00:12:27.400 "data_size": 63488 00:12:27.400 }, 00:12:27.400 { 00:12:27.400 "name": "BaseBdev2", 00:12:27.400 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:27.400 "is_configured": true, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 } 00:12:27.400 ] 00:12:27.400 }' 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.400 17:53:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.968 17:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:27.968 17:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.968 17:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.968 [2024-10-25 17:53:46.136282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:27.968 [2024-10-25 17:53:46.136385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.968 [2024-10-25 17:53:46.136409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:27.968 [2024-10-25 17:53:46.136420] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.968 [2024-10-25 17:53:46.136926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.968 [2024-10-25 17:53:46.136957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:27.968 [2024-10-25 17:53:46.137055] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:27.968 [2024-10-25 17:53:46.137077] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:27.968 [2024-10-25 17:53:46.137088] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:27.968 [2024-10-25 17:53:46.137115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.968 [2024-10-25 17:53:46.152475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:27.968 spare 00:12:27.968 17:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.968 [2024-10-25 17:53:46.154428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.968 17:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.906 "name": "raid_bdev1", 00:12:28.906 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:28.906 "strip_size_kb": 0, 00:12:28.906 "state": "online", 00:12:28.906 "raid_level": "raid1", 00:12:28.906 "superblock": true, 00:12:28.906 "num_base_bdevs": 2, 00:12:28.906 "num_base_bdevs_discovered": 2, 00:12:28.906 "num_base_bdevs_operational": 2, 00:12:28.906 "process": { 00:12:28.906 "type": "rebuild", 00:12:28.906 "target": "spare", 00:12:28.906 "progress": { 00:12:28.906 "blocks": 20480, 00:12:28.906 "percent": 32 00:12:28.906 } 00:12:28.906 }, 00:12:28.906 "base_bdevs_list": [ 00:12:28.906 { 00:12:28.906 "name": "spare", 00:12:28.906 "uuid": "1fbfb67e-d051-5576-bf44-893f77959a42", 00:12:28.906 "is_configured": true, 00:12:28.906 "data_offset": 2048, 00:12:28.906 "data_size": 63488 00:12:28.906 }, 00:12:28.906 { 00:12:28.906 "name": "BaseBdev2", 00:12:28.906 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:28.906 "is_configured": true, 00:12:28.906 "data_offset": 2048, 00:12:28.906 "data_size": 63488 00:12:28.906 } 00:12:28.906 ] 00:12:28.906 }' 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.906 
17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.906 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.906 [2024-10-25 17:53:47.298135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.165 [2024-10-25 17:53:47.360487] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.165 [2024-10-25 17:53:47.360572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.165 [2024-10-25 17:53:47.360594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.165 [2024-10-25 17:53:47.360604] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.165 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.165 "name": "raid_bdev1", 00:12:29.165 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:29.165 "strip_size_kb": 0, 00:12:29.165 "state": "online", 00:12:29.165 "raid_level": "raid1", 00:12:29.165 "superblock": true, 00:12:29.165 "num_base_bdevs": 2, 00:12:29.165 "num_base_bdevs_discovered": 1, 00:12:29.165 "num_base_bdevs_operational": 1, 00:12:29.165 "base_bdevs_list": [ 00:12:29.165 { 00:12:29.165 "name": null, 00:12:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.165 "is_configured": false, 00:12:29.165 "data_offset": 0, 00:12:29.166 "data_size": 63488 00:12:29.166 }, 00:12:29.166 { 00:12:29.166 "name": "BaseBdev2", 00:12:29.166 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:29.166 "is_configured": true, 00:12:29.166 "data_offset": 2048, 00:12:29.166 "data_size": 63488 00:12:29.166 } 00:12:29.166 ] 00:12:29.166 }' 00:12:29.166 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.166 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 17:53:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.734 "name": "raid_bdev1", 00:12:29.734 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:29.734 "strip_size_kb": 0, 00:12:29.734 "state": "online", 00:12:29.734 "raid_level": "raid1", 00:12:29.734 "superblock": true, 00:12:29.734 "num_base_bdevs": 2, 00:12:29.734 "num_base_bdevs_discovered": 1, 00:12:29.734 "num_base_bdevs_operational": 1, 00:12:29.734 "base_bdevs_list": [ 00:12:29.734 { 00:12:29.734 "name": null, 00:12:29.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.734 "is_configured": false, 00:12:29.734 "data_offset": 0, 00:12:29.734 "data_size": 63488 00:12:29.734 }, 00:12:29.734 { 00:12:29.734 "name": "BaseBdev2", 00:12:29.734 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:29.734 "is_configured": true, 00:12:29.734 "data_offset": 2048, 00:12:29.734 "data_size": 
63488 00:12:29.734 } 00:12:29.734 ] 00:12:29.734 }' 00:12:29.734 17:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 [2024-10-25 17:53:48.084359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:29.734 [2024-10-25 17:53:48.084446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.734 [2024-10-25 17:53:48.084471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:29.734 [2024-10-25 17:53:48.084494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.734 [2024-10-25 17:53:48.085047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.734 [2024-10-25 17:53:48.085082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:29.734 [2024-10-25 17:53:48.085178] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:29.734 [2024-10-25 17:53:48.085200] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:29.734 [2024-10-25 17:53:48.085212] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:29.734 [2024-10-25 17:53:48.085225] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:29.734 BaseBdev1 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.734 17:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.669 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.928 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.928 "name": "raid_bdev1", 00:12:30.928 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:30.928 "strip_size_kb": 0, 00:12:30.928 "state": "online", 00:12:30.928 "raid_level": "raid1", 00:12:30.928 "superblock": true, 00:12:30.928 "num_base_bdevs": 2, 00:12:30.928 "num_base_bdevs_discovered": 1, 00:12:30.928 "num_base_bdevs_operational": 1, 00:12:30.928 "base_bdevs_list": [ 00:12:30.928 { 00:12:30.928 "name": null, 00:12:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.928 "is_configured": false, 00:12:30.928 "data_offset": 0, 00:12:30.928 "data_size": 63488 00:12:30.928 }, 00:12:30.928 { 00:12:30.928 "name": "BaseBdev2", 00:12:30.928 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:30.928 "is_configured": true, 00:12:30.928 "data_offset": 2048, 00:12:30.928 "data_size": 63488 00:12:30.928 } 00:12:30.928 ] 00:12:30.928 }' 00:12:30.928 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.928 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.187 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.447 "name": "raid_bdev1", 00:12:31.447 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:31.447 "strip_size_kb": 0, 00:12:31.447 "state": "online", 00:12:31.447 "raid_level": "raid1", 00:12:31.447 "superblock": true, 00:12:31.447 "num_base_bdevs": 2, 00:12:31.447 "num_base_bdevs_discovered": 1, 00:12:31.447 "num_base_bdevs_operational": 1, 00:12:31.447 "base_bdevs_list": [ 00:12:31.447 { 00:12:31.447 "name": null, 00:12:31.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.447 "is_configured": false, 00:12:31.447 "data_offset": 0, 00:12:31.447 "data_size": 63488 00:12:31.447 }, 00:12:31.447 { 00:12:31.447 "name": "BaseBdev2", 00:12:31.447 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:31.447 "is_configured": true, 00:12:31.447 "data_offset": 2048, 00:12:31.447 "data_size": 63488 00:12:31.447 } 00:12:31.447 ] 00:12:31.447 }' 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.447 17:53:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.447 [2024-10-25 17:53:49.760407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.447 [2024-10-25 17:53:49.760602] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:31.447 [2024-10-25 17:53:49.760630] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:31.447 request: 00:12:31.447 { 00:12:31.447 "base_bdev": "BaseBdev1", 00:12:31.447 "raid_bdev": "raid_bdev1", 00:12:31.447 "method": 
"bdev_raid_add_base_bdev", 00:12:31.447 "req_id": 1 00:12:31.447 } 00:12:31.447 Got JSON-RPC error response 00:12:31.447 response: 00:12:31.447 { 00:12:31.447 "code": -22, 00:12:31.447 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:31.447 } 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.447 17:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.386 17:53:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.386 17:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.644 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.644 "name": "raid_bdev1", 00:12:32.644 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:32.644 "strip_size_kb": 0, 00:12:32.644 "state": "online", 00:12:32.644 "raid_level": "raid1", 00:12:32.644 "superblock": true, 00:12:32.644 "num_base_bdevs": 2, 00:12:32.644 "num_base_bdevs_discovered": 1, 00:12:32.644 "num_base_bdevs_operational": 1, 00:12:32.644 "base_bdevs_list": [ 00:12:32.644 { 00:12:32.644 "name": null, 00:12:32.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.644 "is_configured": false, 00:12:32.644 "data_offset": 0, 00:12:32.644 "data_size": 63488 00:12:32.644 }, 00:12:32.644 { 00:12:32.644 "name": "BaseBdev2", 00:12:32.644 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:32.644 "is_configured": true, 00:12:32.644 "data_offset": 2048, 00:12:32.644 "data_size": 63488 00:12:32.644 } 00:12:32.644 ] 00:12:32.644 }' 00:12:32.644 17:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.644 17:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.902 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.161 "name": "raid_bdev1", 00:12:33.161 "uuid": "6846d17c-1c15-4b6d-bb36-69dd15405bcb", 00:12:33.161 "strip_size_kb": 0, 00:12:33.161 "state": "online", 00:12:33.161 "raid_level": "raid1", 00:12:33.161 "superblock": true, 00:12:33.161 "num_base_bdevs": 2, 00:12:33.161 "num_base_bdevs_discovered": 1, 00:12:33.161 "num_base_bdevs_operational": 1, 00:12:33.161 "base_bdevs_list": [ 00:12:33.161 { 00:12:33.161 "name": null, 00:12:33.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.161 "is_configured": false, 00:12:33.161 "data_offset": 0, 00:12:33.161 "data_size": 63488 00:12:33.161 }, 00:12:33.161 { 00:12:33.161 "name": "BaseBdev2", 00:12:33.161 "uuid": "914adc29-b646-515e-8e46-673f7e0d6dba", 00:12:33.161 "is_configured": true, 00:12:33.161 "data_offset": 2048, 00:12:33.161 "data_size": 63488 00:12:33.161 } 00:12:33.161 ] 00:12:33.161 }' 00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.161 17:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75499 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75499 ']' 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75499 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75499 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.162 killing process with pid 75499 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75499' 00:12:33.162 17:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75499 00:12:33.162 Received shutdown signal, test time was about 60.000000 seconds 00:12:33.162 00:12:33.162 Latency(us) 00:12:33.162 [2024-10-25T17:53:51.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.162 [2024-10-25T17:53:51.598Z] =================================================================================================================== 00:12:33.162 [2024-10-25T17:53:51.598Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:33.162 [2024-10-25 17:53:51.476982] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.162 17:53:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75499 00:12:33.162 [2024-10-25 17:53:51.477134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.162 [2024-10-25 17:53:51.477203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.162 [2024-10-25 17:53:51.477216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:33.425 [2024-10-25 17:53:51.793163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.809 17:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:34.809 00:12:34.809 real 0m25.578s 00:12:34.809 user 0m30.878s 00:12:34.809 sys 0m4.226s 00:12:34.809 17:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.809 17:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.809 ************************************ 00:12:34.809 END TEST raid_rebuild_test_sb 00:12:34.809 ************************************ 00:12:34.809 17:53:53 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:34.809 17:53:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:34.809 17:53:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.809 17:53:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.809 ************************************ 00:12:34.809 START TEST raid_rebuild_test_io 00:12:34.809 ************************************ 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:34.809 
17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76251 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76251 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76251 ']' 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.809 17:53:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.809 [2024-10-25 17:53:53.152227] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:34.809 [2024-10-25 17:53:53.152401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 00:12:34.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.809 Zero copy mechanism will not be used. 
00:12:35.069 [2024-10-25 17:53:53.324579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.069 [2024-10-25 17:53:53.458912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.328 [2024-10-25 17:53:53.697024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.328 [2024-10-25 17:53:53.697068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 BaseBdev1_malloc 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 [2024-10-25 17:53:54.131882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.896 [2024-10-25 17:53:54.131977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.896 [2024-10-25 17:53:54.132006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:35.896 [2024-10-25 
17:53:54.132021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.896 [2024-10-25 17:53:54.134501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.896 [2024-10-25 17:53:54.134551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.896 BaseBdev1 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 BaseBdev2_malloc 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 [2024-10-25 17:53:54.194445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:35.896 [2024-10-25 17:53:54.194524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.896 [2024-10-25 17:53:54.194547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:35.896 [2024-10-25 17:53:54.194560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.897 [2024-10-25 17:53:54.197031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:35.897 [2024-10-25 17:53:54.197077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.897 BaseBdev2 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 spare_malloc 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 spare_delay 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 [2024-10-25 17:53:54.280738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:35.897 [2024-10-25 17:53:54.280811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.897 [2024-10-25 17:53:54.280847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:35.897 [2024-10-25 17:53:54.280862] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.897 [2024-10-25 17:53:54.283392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.897 [2024-10-25 17:53:54.283436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.897 spare 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 [2024-10-25 17:53:54.292773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.897 [2024-10-25 17:53:54.294875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.897 [2024-10-25 17:53:54.295009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:35.897 [2024-10-25 17:53:54.295026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:35.897 [2024-10-25 17:53:54.295327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:35.897 [2024-10-25 17:53:54.295524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:35.897 [2024-10-25 17:53:54.295545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:35.897 [2024-10-25 17:53:54.295721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.155 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.155 "name": "raid_bdev1", 00:12:36.155 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:36.155 "strip_size_kb": 0, 00:12:36.155 "state": "online", 00:12:36.155 "raid_level": "raid1", 00:12:36.155 "superblock": false, 00:12:36.155 "num_base_bdevs": 2, 00:12:36.155 
"num_base_bdevs_discovered": 2, 00:12:36.155 "num_base_bdevs_operational": 2, 00:12:36.155 "base_bdevs_list": [ 00:12:36.155 { 00:12:36.155 "name": "BaseBdev1", 00:12:36.155 "uuid": "d83f26f0-c505-5f3e-bcd1-03ba72d06594", 00:12:36.155 "is_configured": true, 00:12:36.155 "data_offset": 0, 00:12:36.155 "data_size": 65536 00:12:36.155 }, 00:12:36.155 { 00:12:36.155 "name": "BaseBdev2", 00:12:36.155 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:36.155 "is_configured": true, 00:12:36.155 "data_offset": 0, 00:12:36.155 "data_size": 65536 00:12:36.155 } 00:12:36.155 ] 00:12:36.155 }' 00:12:36.155 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.155 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.414 [2024-10-25 17:53:54.768367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:36.414 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.672 [2024-10-25 17:53:54.871969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.672 "name": "raid_bdev1", 00:12:36.672 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:36.672 "strip_size_kb": 0, 00:12:36.672 "state": "online", 00:12:36.672 "raid_level": "raid1", 00:12:36.672 "superblock": false, 00:12:36.672 "num_base_bdevs": 2, 00:12:36.672 "num_base_bdevs_discovered": 1, 00:12:36.672 "num_base_bdevs_operational": 1, 00:12:36.672 "base_bdevs_list": [ 00:12:36.672 { 00:12:36.672 "name": null, 00:12:36.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.672 "is_configured": false, 00:12:36.672 "data_offset": 0, 00:12:36.672 "data_size": 65536 00:12:36.672 }, 00:12:36.672 { 00:12:36.672 "name": "BaseBdev2", 00:12:36.672 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:36.672 "is_configured": true, 00:12:36.672 "data_offset": 0, 00:12:36.672 "data_size": 65536 00:12:36.672 } 00:12:36.672 ] 00:12:36.672 }' 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.672 17:53:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.672 [2024-10-25 17:53:54.981353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:36.672 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:36.672 Zero copy mechanism will not be used. 00:12:36.672 Running I/O for 60 seconds... 00:12:36.931 17:53:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.931 17:53:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.931 17:53:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.931 [2024-10-25 17:53:55.274680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.931 17:53:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.931 17:53:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:36.931 [2024-10-25 17:53:55.346177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:36.931 [2024-10-25 17:53:55.348482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.190 [2024-10-25 17:53:55.458055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.190 [2024-10-25 17:53:55.458697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.449 [2024-10-25 17:53:55.684087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.449 [2024-10-25 17:53:55.684426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.708 [2024-10-25 17:53:55.929134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:37.708 137.00 IOPS, 411.00 MiB/s [2024-10-25T17:53:56.144Z] [2024-10-25 17:53:56.077934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.967 "name": "raid_bdev1", 00:12:37.967 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:37.967 "strip_size_kb": 0, 00:12:37.967 "state": "online", 00:12:37.967 "raid_level": "raid1", 00:12:37.967 "superblock": false, 00:12:37.967 "num_base_bdevs": 2, 00:12:37.967 "num_base_bdevs_discovered": 2, 00:12:37.967 "num_base_bdevs_operational": 2, 00:12:37.967 "process": { 00:12:37.967 "type": "rebuild", 00:12:37.967 "target": "spare", 00:12:37.967 "progress": { 00:12:37.967 "blocks": 12288, 00:12:37.967 "percent": 18 00:12:37.967 } 00:12:37.967 }, 00:12:37.967 "base_bdevs_list": [ 00:12:37.967 { 00:12:37.967 "name": "spare", 00:12:37.967 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:37.967 
"is_configured": true, 00:12:37.967 "data_offset": 0, 00:12:37.967 "data_size": 65536 00:12:37.967 }, 00:12:37.967 { 00:12:37.967 "name": "BaseBdev2", 00:12:37.967 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:37.967 "is_configured": true, 00:12:37.967 "data_offset": 0, 00:12:37.967 "data_size": 65536 00:12:37.967 } 00:12:37.967 ] 00:12:37.967 }' 00:12:37.967 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.226 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.226 [2024-10-25 17:53:56.492719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.226 [2024-10-25 17:53:56.538203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:38.226 [2024-10-25 17:53:56.638086] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.226 [2024-10-25 17:53:56.641062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.226 [2024-10-25 17:53:56.641122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.226 [2024-10-25 17:53:56.641138] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.485 [2024-10-25 17:53:56.700868] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.485 "name": "raid_bdev1", 00:12:38.485 
"uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:38.485 "strip_size_kb": 0, 00:12:38.485 "state": "online", 00:12:38.485 "raid_level": "raid1", 00:12:38.485 "superblock": false, 00:12:38.485 "num_base_bdevs": 2, 00:12:38.485 "num_base_bdevs_discovered": 1, 00:12:38.485 "num_base_bdevs_operational": 1, 00:12:38.485 "base_bdevs_list": [ 00:12:38.485 { 00:12:38.485 "name": null, 00:12:38.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.485 "is_configured": false, 00:12:38.485 "data_offset": 0, 00:12:38.485 "data_size": 65536 00:12:38.485 }, 00:12:38.485 { 00:12:38.485 "name": "BaseBdev2", 00:12:38.485 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:38.485 "is_configured": true, 00:12:38.485 "data_offset": 0, 00:12:38.485 "data_size": 65536 00:12:38.485 } 00:12:38.485 ] 00:12:38.485 }' 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.485 17:53:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.744 122.00 IOPS, 366.00 MiB/s [2024-10-25T17:53:57.180Z] 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.744 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.744 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.744 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.744 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.002 17:53:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.002 "name": "raid_bdev1", 00:12:39.002 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:39.002 "strip_size_kb": 0, 00:12:39.002 "state": "online", 00:12:39.002 "raid_level": "raid1", 00:12:39.002 "superblock": false, 00:12:39.002 "num_base_bdevs": 2, 00:12:39.002 "num_base_bdevs_discovered": 1, 00:12:39.002 "num_base_bdevs_operational": 1, 00:12:39.002 "base_bdevs_list": [ 00:12:39.002 { 00:12:39.002 "name": null, 00:12:39.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.002 "is_configured": false, 00:12:39.002 "data_offset": 0, 00:12:39.002 "data_size": 65536 00:12:39.002 }, 00:12:39.002 { 00:12:39.002 "name": "BaseBdev2", 00:12:39.002 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:39.002 "is_configured": true, 00:12:39.002 "data_offset": 0, 00:12:39.002 "data_size": 65536 00:12:39.002 } 00:12:39.002 ] 00:12:39.002 }' 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.002 [2024-10-25 17:53:57.334984] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.002 17:53:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:39.002 [2024-10-25 17:53:57.396267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.002 [2024-10-25 17:53:57.398382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.259 [2024-10-25 17:53:57.516029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.259 [2024-10-25 17:53:57.516672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.517 [2024-10-25 17:53:57.734292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.517 [2024-10-25 17:53:57.734650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.778 [2024-10-25 17:53:57.965535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:39.778 137.00 IOPS, 411.00 MiB/s [2024-10-25T17:53:58.214Z] [2024-10-25 17:53:58.084216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:39.778 [2024-10-25 17:53:58.084586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.060 [2024-10-25 17:53:58.350313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.060 "name": "raid_bdev1", 00:12:40.060 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:40.060 "strip_size_kb": 0, 00:12:40.060 "state": "online", 00:12:40.060 "raid_level": "raid1", 00:12:40.060 "superblock": false, 00:12:40.060 "num_base_bdevs": 2, 00:12:40.060 "num_base_bdevs_discovered": 2, 00:12:40.060 "num_base_bdevs_operational": 2, 00:12:40.060 "process": { 00:12:40.060 "type": "rebuild", 00:12:40.060 "target": "spare", 00:12:40.060 "progress": { 00:12:40.060 "blocks": 14336, 00:12:40.060 "percent": 21 00:12:40.060 } 00:12:40.060 }, 00:12:40.060 "base_bdevs_list": [ 00:12:40.060 { 00:12:40.060 "name": "spare", 00:12:40.060 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:40.060 "is_configured": true, 00:12:40.060 "data_offset": 0, 00:12:40.060 "data_size": 65536 00:12:40.060 }, 00:12:40.060 { 00:12:40.060 "name": "BaseBdev2", 00:12:40.060 "uuid": 
"ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:40.060 "is_configured": true, 00:12:40.060 "data_offset": 0, 00:12:40.060 "data_size": 65536 00:12:40.060 } 00:12:40.060 ] 00:12:40.060 }' 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.060 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.317 17:53:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.317 "name": "raid_bdev1", 00:12:40.317 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:40.317 "strip_size_kb": 0, 00:12:40.317 "state": "online", 00:12:40.317 "raid_level": "raid1", 00:12:40.317 "superblock": false, 00:12:40.317 "num_base_bdevs": 2, 00:12:40.317 "num_base_bdevs_discovered": 2, 00:12:40.317 "num_base_bdevs_operational": 2, 00:12:40.317 "process": { 00:12:40.317 "type": "rebuild", 00:12:40.317 "target": "spare", 00:12:40.317 "progress": { 00:12:40.317 "blocks": 14336, 00:12:40.317 "percent": 21 00:12:40.317 } 00:12:40.317 }, 00:12:40.317 "base_bdevs_list": [ 00:12:40.317 { 00:12:40.317 "name": "spare", 00:12:40.317 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:40.317 "is_configured": true, 00:12:40.317 "data_offset": 0, 00:12:40.317 "data_size": 65536 00:12:40.317 }, 00:12:40.317 { 00:12:40.317 "name": "BaseBdev2", 00:12:40.317 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:40.317 "is_configured": true, 00:12:40.317 "data_offset": 0, 00:12:40.317 "data_size": 65536 00:12:40.317 } 00:12:40.317 ] 00:12:40.317 }' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.317 [2024-10-25 17:53:58.571906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:40.317 [2024-10-25 17:53:58.572278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.317 17:53:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.574 [2024-10-25 17:53:58.911017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:40.832 114.50 IOPS, 343.50 MiB/s [2024-10-25T17:53:59.268Z] [2024-10-25 17:53:59.167587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:41.091 [2024-10-25 17:53:59.518846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:41.091 [2024-10-25 17:53:59.519361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.350 17:53:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.350 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.350 "name": "raid_bdev1", 00:12:41.350 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:41.350 "strip_size_kb": 0, 00:12:41.350 "state": "online", 00:12:41.350 "raid_level": "raid1", 00:12:41.350 "superblock": false, 00:12:41.350 "num_base_bdevs": 2, 00:12:41.350 "num_base_bdevs_discovered": 2, 00:12:41.350 "num_base_bdevs_operational": 2, 00:12:41.350 "process": { 00:12:41.350 "type": "rebuild", 00:12:41.350 "target": "spare", 00:12:41.350 "progress": { 00:12:41.350 "blocks": 26624, 00:12:41.350 "percent": 40 00:12:41.350 } 00:12:41.350 }, 00:12:41.350 "base_bdevs_list": [ 00:12:41.350 { 00:12:41.350 "name": "spare", 00:12:41.350 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:41.350 "is_configured": true, 00:12:41.350 "data_offset": 0, 00:12:41.351 "data_size": 65536 00:12:41.351 }, 00:12:41.351 { 00:12:41.351 "name": "BaseBdev2", 00:12:41.351 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:41.351 "is_configured": true, 00:12:41.351 "data_offset": 0, 00:12:41.351 "data_size": 65536 00:12:41.351 } 00:12:41.351 ] 00:12:41.351 }' 00:12:41.351 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.351 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.351 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.610 17:53:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.610 17:53:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.610 [2024-10-25 17:53:59.966423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:41.870 104.80 IOPS, 314.40 MiB/s [2024-10-25T17:54:00.306Z] [2024-10-25 17:54:00.185017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:42.130 [2024-10-25 17:54:00.407773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.389 17:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.645 17:54:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.645 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:42.645 "name": "raid_bdev1", 00:12:42.645 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:42.645 "strip_size_kb": 0, 00:12:42.645 "state": "online", 00:12:42.645 "raid_level": "raid1", 00:12:42.645 "superblock": false, 00:12:42.645 "num_base_bdevs": 2, 00:12:42.645 "num_base_bdevs_discovered": 2, 00:12:42.645 "num_base_bdevs_operational": 2, 00:12:42.645 "process": { 00:12:42.645 "type": "rebuild", 00:12:42.645 "target": "spare", 00:12:42.645 "progress": { 00:12:42.645 "blocks": 45056, 00:12:42.645 "percent": 68 00:12:42.645 } 00:12:42.645 }, 00:12:42.645 "base_bdevs_list": [ 00:12:42.645 { 00:12:42.645 "name": "spare", 00:12:42.645 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:42.645 "is_configured": true, 00:12:42.645 "data_offset": 0, 00:12:42.645 "data_size": 65536 00:12:42.645 }, 00:12:42.645 { 00:12:42.646 "name": "BaseBdev2", 00:12:42.646 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:42.646 "is_configured": true, 00:12:42.646 "data_offset": 0, 00:12:42.646 "data_size": 65536 00:12:42.646 } 00:12:42.646 ] 00:12:42.646 }' 00:12:42.646 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.646 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.646 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.646 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.646 17:54:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.212 95.17 IOPS, 285.50 MiB/s [2024-10-25T17:54:01.648Z] [2024-10-25 17:54:01.374146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:43.212 [2024-10-25 17:54:01.581633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:12:43.212 [2024-10-25 17:54:01.582003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.782 17:54:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.782 85.29 IOPS, 255.86 MiB/s [2024-10-25T17:54:02.218Z] 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.782 "name": "raid_bdev1", 00:12:43.782 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:43.782 "strip_size_kb": 0, 00:12:43.782 "state": "online", 00:12:43.782 "raid_level": "raid1", 00:12:43.782 "superblock": false, 00:12:43.782 "num_base_bdevs": 2, 00:12:43.782 "num_base_bdevs_discovered": 2, 00:12:43.782 "num_base_bdevs_operational": 2, 00:12:43.782 "process": { 00:12:43.782 "type": "rebuild", 00:12:43.782 "target": 
"spare", 00:12:43.782 "progress": { 00:12:43.782 "blocks": 63488, 00:12:43.782 "percent": 96 00:12:43.782 } 00:12:43.782 }, 00:12:43.782 "base_bdevs_list": [ 00:12:43.782 { 00:12:43.782 "name": "spare", 00:12:43.782 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:43.782 "is_configured": true, 00:12:43.782 "data_offset": 0, 00:12:43.782 "data_size": 65536 00:12:43.782 }, 00:12:43.782 { 00:12:43.782 "name": "BaseBdev2", 00:12:43.782 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:43.782 "is_configured": true, 00:12:43.782 "data_offset": 0, 00:12:43.782 "data_size": 65536 00:12:43.782 } 00:12:43.782 ] 00:12:43.782 }' 00:12:43.782 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.782 [2024-10-25 17:54:02.020896] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.782 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.782 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.782 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.782 17:54:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.782 [2024-10-25 17:54:02.127021] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.782 [2024-10-25 17:54:02.129503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.721 79.38 IOPS, 238.12 MiB/s [2024-10-25T17:54:03.157Z] 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.721 17:54:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.721 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.982 "name": "raid_bdev1", 00:12:44.982 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:44.982 "strip_size_kb": 0, 00:12:44.982 "state": "online", 00:12:44.982 "raid_level": "raid1", 00:12:44.982 "superblock": false, 00:12:44.982 "num_base_bdevs": 2, 00:12:44.982 "num_base_bdevs_discovered": 2, 00:12:44.982 "num_base_bdevs_operational": 2, 00:12:44.982 "base_bdevs_list": [ 00:12:44.982 { 00:12:44.982 "name": "spare", 00:12:44.982 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:44.982 "is_configured": true, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 }, 00:12:44.982 { 00:12:44.982 "name": "BaseBdev2", 00:12:44.982 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:44.982 "is_configured": true, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 } 00:12:44.982 ] 00:12:44.982 }' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.982 "name": "raid_bdev1", 00:12:44.982 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:44.982 "strip_size_kb": 0, 00:12:44.982 "state": "online", 00:12:44.982 "raid_level": "raid1", 00:12:44.982 "superblock": false, 00:12:44.982 "num_base_bdevs": 2, 00:12:44.982 "num_base_bdevs_discovered": 2, 00:12:44.982 "num_base_bdevs_operational": 2, 00:12:44.982 "base_bdevs_list": [ 00:12:44.982 { 00:12:44.982 
"name": "spare", 00:12:44.982 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:44.982 "is_configured": true, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 }, 00:12:44.982 { 00:12:44.982 "name": "BaseBdev2", 00:12:44.982 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:44.982 "is_configured": true, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 } 00:12:44.982 ] 00:12:44.982 }' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.982 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.983 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.242 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.242 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.242 "name": "raid_bdev1", 00:12:45.242 "uuid": "6516c756-5093-41b8-b51a-edd46986c965", 00:12:45.242 "strip_size_kb": 0, 00:12:45.242 "state": "online", 00:12:45.242 "raid_level": "raid1", 00:12:45.242 "superblock": false, 00:12:45.242 "num_base_bdevs": 2, 00:12:45.242 "num_base_bdevs_discovered": 2, 00:12:45.242 "num_base_bdevs_operational": 2, 00:12:45.242 "base_bdevs_list": [ 00:12:45.242 { 00:12:45.242 "name": "spare", 00:12:45.242 "uuid": "05757461-e5ac-52bf-9d17-c660d11da488", 00:12:45.242 "is_configured": true, 00:12:45.242 "data_offset": 0, 00:12:45.242 "data_size": 65536 00:12:45.242 }, 00:12:45.242 { 00:12:45.242 "name": "BaseBdev2", 00:12:45.242 "uuid": "ffc1e279-47a1-5db8-a43e-d3f5f0a7dc8a", 00:12:45.242 "is_configured": true, 00:12:45.242 "data_offset": 0, 00:12:45.242 "data_size": 65536 00:12:45.242 } 00:12:45.242 ] 00:12:45.242 }' 00:12:45.242 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.242 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.501 17:54:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.501 17:54:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.501 17:54:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.501 [2024-10-25 17:54:03.887120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.501 [2024-10-25 17:54:03.887164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.761 00:12:45.761 Latency(us) 00:12:45.761 [2024-10-25T17:54:04.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.761 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:45.761 raid_bdev1 : 9.00 73.85 221.55 0.00 0.00 18535.38 354.15 121799.66 00:12:45.761 [2024-10-25T17:54:04.197Z] =================================================================================================================== 00:12:45.761 [2024-10-25T17:54:04.197Z] Total : 73.85 221.55 0.00 0.00 18535.38 354.15 121799.66 00:12:45.761 [2024-10-25 17:54:03.997512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.761 [2024-10-25 17:54:03.997574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.761 [2024-10-25 17:54:03.997667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.761 [2024-10-25 17:54:03.997681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:45.761 { 00:12:45.761 "results": [ 00:12:45.761 { 00:12:45.761 "job": "raid_bdev1", 00:12:45.761 "core_mask": "0x1", 00:12:45.761 "workload": "randrw", 00:12:45.761 "percentage": 50, 00:12:45.761 "status": "finished", 00:12:45.761 "queue_depth": 2, 00:12:45.762 "io_size": 3145728, 00:12:45.762 "runtime": 9.004543, 00:12:45.762 "iops": 73.85161023718805, 00:12:45.762 "mibps": 221.55483071156416, 00:12:45.762 "io_failed": 0, 00:12:45.762 "io_timeout": 0, 00:12:45.762 "avg_latency_us": 18535.383755458515, 00:12:45.762 "min_latency_us": 354.15196506550217, 00:12:45.762 
"max_latency_us": 121799.6576419214 00:12:45.762 } 00:12:45.762 ], 00:12:45.762 "core_count": 1 00:12:45.762 } 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.762 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.762 17:54:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:46.021 /dev/nbd0 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.021 1+0 records in 00:12:46.021 1+0 records out 00:12:46.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297852 s, 13.8 MB/s 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.021 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:46.281 /dev/nbd1 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.281 1+0 records in 00:12:46.281 1+0 records out 00:12:46.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570935 s, 7.2 MB/s 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.281 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:12:46.540 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:46.540 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.540 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:46.541 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.541 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.541 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.541 17:54:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.803 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76251 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76251 ']' 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76251 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.062 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 76251 00:12:47.062 killing process with pid 76251 00:12:47.062 Received shutdown signal, test time was about 10.390991 seconds 00:12:47.062 00:12:47.062 Latency(us) 00:12:47.063 [2024-10-25T17:54:05.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.063 [2024-10-25T17:54:05.499Z] =================================================================================================================== 00:12:47.063 [2024-10-25T17:54:05.499Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.063 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.063 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.063 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76251' 00:12:47.063 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76251 00:12:47.063 [2024-10-25 17:54:05.354694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.063 17:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76251 00:12:47.321 [2024-10-25 17:54:05.607649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:48.700 00:12:48.700 real 0m13.830s 00:12:48.700 user 0m17.289s 00:12:48.700 sys 0m1.616s 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.700 ************************************ 00:12:48.700 END TEST raid_rebuild_test_io 00:12:48.700 ************************************ 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.700 17:54:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:48.700 
17:54:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:48.700 17:54:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.700 17:54:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.700 ************************************ 00:12:48.700 START TEST raid_rebuild_test_sb_io 00:12:48.700 ************************************ 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76656 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76656 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76656 ']' 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:48.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.700 17:54:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:48.700 Zero copy mechanism will not be used. 00:12:48.700 [2024-10-25 17:54:07.057670] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:48.700 [2024-10-25 17:54:07.057880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76656 ] 00:12:48.961 [2024-10-25 17:54:07.239853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.961 [2024-10-25 17:54:07.374423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.220 [2024-10-25 17:54:07.603875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.220 [2024-10-25 17:54:07.603937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.788 17:54:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.788 BaseBdev1_malloc 00:12:49.788 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.788 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:49.788 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 [2024-10-25 17:54:08.047447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:49.789 [2024-10-25 17:54:08.047566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.789 [2024-10-25 17:54:08.047600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:49.789 [2024-10-25 17:54:08.047618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.789 [2024-10-25 17:54:08.050196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.789 [2024-10-25 17:54:08.050262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.789 BaseBdev1 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 BaseBdev2_malloc 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 [2024-10-25 17:54:08.109575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:49.789 [2024-10-25 17:54:08.109815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.789 [2024-10-25 17:54:08.109913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:49.789 [2024-10-25 17:54:08.109968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.789 [2024-10-25 17:54:08.112598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.789 [2024-10-25 17:54:08.112737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.789 BaseBdev2 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 spare_malloc 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 spare_delay 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 [2024-10-25 17:54:08.200242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.789 [2024-10-25 17:54:08.200457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.789 [2024-10-25 17:54:08.200517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:49.789 [2024-10-25 17:54:08.200571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.789 [2024-10-25 17:54:08.203316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.789 [2024-10-25 17:54:08.203450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.789 spare 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.789 [2024-10-25 17:54:08.212294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.789 [2024-10-25 17:54:08.214573] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.789 [2024-10-25 17:54:08.214906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:49.789 [2024-10-25 17:54:08.214980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.789 [2024-10-25 17:54:08.215363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:49.789 [2024-10-25 17:54:08.215643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:49.789 [2024-10-25 17:54:08.215700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:49.789 [2024-10-25 17:54:08.216025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:49.789 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.049 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.049 "name": "raid_bdev1", 00:12:50.049 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:50.049 "strip_size_kb": 0, 00:12:50.049 "state": "online", 00:12:50.049 "raid_level": "raid1", 00:12:50.049 "superblock": true, 00:12:50.049 "num_base_bdevs": 2, 00:12:50.049 "num_base_bdevs_discovered": 2, 00:12:50.049 "num_base_bdevs_operational": 2, 00:12:50.049 "base_bdevs_list": [ 00:12:50.049 { 00:12:50.049 "name": "BaseBdev1", 00:12:50.049 "uuid": "557f9bc6-85b5-5ae2-9504-cc97bdc073e1", 00:12:50.049 "is_configured": true, 00:12:50.049 "data_offset": 2048, 00:12:50.049 "data_size": 63488 00:12:50.049 }, 00:12:50.049 { 00:12:50.049 "name": "BaseBdev2", 00:12:50.049 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:50.050 "is_configured": true, 00:12:50.050 "data_offset": 2048, 00:12:50.050 "data_size": 63488 00:12:50.050 } 00:12:50.050 ] 00:12:50.050 }' 00:12:50.050 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.050 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.309 [2024-10-25 17:54:08.667959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.309 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.569 [2024-10-25 17:54:08.771399] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.569 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.569 "name": 
"raid_bdev1", 00:12:50.569 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:50.569 "strip_size_kb": 0, 00:12:50.569 "state": "online", 00:12:50.569 "raid_level": "raid1", 00:12:50.569 "superblock": true, 00:12:50.569 "num_base_bdevs": 2, 00:12:50.569 "num_base_bdevs_discovered": 1, 00:12:50.569 "num_base_bdevs_operational": 1, 00:12:50.569 "base_bdevs_list": [ 00:12:50.569 { 00:12:50.569 "name": null, 00:12:50.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.569 "is_configured": false, 00:12:50.569 "data_offset": 0, 00:12:50.569 "data_size": 63488 00:12:50.569 }, 00:12:50.569 { 00:12:50.569 "name": "BaseBdev2", 00:12:50.569 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:50.569 "is_configured": true, 00:12:50.569 "data_offset": 2048, 00:12:50.569 "data_size": 63488 00:12:50.569 } 00:12:50.569 ] 00:12:50.569 }' 00:12:50.570 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.570 17:54:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.570 [2024-10-25 17:54:08.892140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:50.570 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.570 Zero copy mechanism will not be used. 00:12:50.570 Running I/O for 60 seconds... 
00:12:50.829 17:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.829 17:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.829 17:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.829 [2024-10-25 17:54:09.257103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.088 17:54:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.089 17:54:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.089 [2024-10-25 17:54:09.337498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:51.089 [2024-10-25 17:54:09.339815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.089 [2024-10-25 17:54:09.457966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.089 [2024-10-25 17:54:09.458736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.348 [2024-10-25 17:54:09.610927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.607 173.00 IOPS, 519.00 MiB/s [2024-10-25T17:54:10.043Z] [2024-10-25 17:54:09.942889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.607 [2024-10-25 17:54:09.943563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.866 [2024-10-25 17:54:10.145407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.867 [2024-10-25 17:54:10.145791] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.126 "name": "raid_bdev1", 00:12:52.126 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:52.126 "strip_size_kb": 0, 00:12:52.126 "state": "online", 00:12:52.126 "raid_level": "raid1", 00:12:52.126 "superblock": true, 00:12:52.126 "num_base_bdevs": 2, 00:12:52.126 "num_base_bdevs_discovered": 2, 00:12:52.126 "num_base_bdevs_operational": 2, 00:12:52.126 "process": { 00:12:52.126 "type": "rebuild", 00:12:52.126 "target": "spare", 00:12:52.126 "progress": { 00:12:52.126 "blocks": 10240, 00:12:52.126 "percent": 16 00:12:52.126 } 00:12:52.126 }, 00:12:52.126 "base_bdevs_list": [ 00:12:52.126 { 00:12:52.126 "name": "spare", 
00:12:52.126 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:52.126 "is_configured": true, 00:12:52.126 "data_offset": 2048, 00:12:52.126 "data_size": 63488 00:12:52.126 }, 00:12:52.126 { 00:12:52.126 "name": "BaseBdev2", 00:12:52.126 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:52.126 "is_configured": true, 00:12:52.126 "data_offset": 2048, 00:12:52.126 "data_size": 63488 00:12:52.126 } 00:12:52.126 ] 00:12:52.126 }' 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.126 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.127 [2024-10-25 17:54:10.471054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.127 [2024-10-25 17:54:10.500462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.386 [2024-10-25 17:54:10.616161] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.386 [2024-10-25 17:54:10.626460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.386 [2024-10-25 17:54:10.626632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.386 [2024-10-25 17:54:10.626663] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:12:52.386 [2024-10-25 17:54:10.684963] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:52.386 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.386 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.387 17:54:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.387 "name": "raid_bdev1", 00:12:52.387 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:52.387 "strip_size_kb": 0, 00:12:52.387 "state": "online", 00:12:52.387 "raid_level": "raid1", 00:12:52.387 "superblock": true, 00:12:52.387 "num_base_bdevs": 2, 00:12:52.387 "num_base_bdevs_discovered": 1, 00:12:52.387 "num_base_bdevs_operational": 1, 00:12:52.387 "base_bdevs_list": [ 00:12:52.387 { 00:12:52.387 "name": null, 00:12:52.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.387 "is_configured": false, 00:12:52.387 "data_offset": 0, 00:12:52.387 "data_size": 63488 00:12:52.387 }, 00:12:52.387 { 00:12:52.387 "name": "BaseBdev2", 00:12:52.387 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:52.387 "is_configured": true, 00:12:52.387 "data_offset": 2048, 00:12:52.387 "data_size": 63488 00:12:52.387 } 00:12:52.387 ] 00:12:52.387 }' 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.387 17:54:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.906 120.00 IOPS, 360.00 MiB/s [2024-10-25T17:54:11.342Z] 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.906 "name": "raid_bdev1", 00:12:52.906 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:52.906 "strip_size_kb": 0, 00:12:52.906 "state": "online", 00:12:52.906 "raid_level": "raid1", 00:12:52.906 "superblock": true, 00:12:52.906 "num_base_bdevs": 2, 00:12:52.906 "num_base_bdevs_discovered": 1, 00:12:52.906 "num_base_bdevs_operational": 1, 00:12:52.906 "base_bdevs_list": [ 00:12:52.906 { 00:12:52.906 "name": null, 00:12:52.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.906 "is_configured": false, 00:12:52.906 "data_offset": 0, 00:12:52.906 "data_size": 63488 00:12:52.906 }, 00:12:52.906 { 00:12:52.906 "name": "BaseBdev2", 00:12:52.906 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:52.906 "is_configured": true, 00:12:52.906 "data_offset": 2048, 00:12:52.906 "data_size": 63488 00:12:52.906 } 00:12:52.906 ] 00:12:52.906 }' 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.906 [2024-10-25 17:54:11.282947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.906 17:54:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:53.165 [2024-10-25 17:54:11.347559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:53.165 [2024-10-25 17:54:11.349822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.165 [2024-10-25 17:54:11.460549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.165 [2024-10-25 17:54:11.461317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.165 [2024-10-25 17:54:11.585285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.165 [2024-10-25 17:54:11.585794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.734 130.00 IOPS, 390.00 MiB/s [2024-10-25T17:54:12.170Z] [2024-10-25 17:54:11.924043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:53.734 [2024-10-25 17:54:12.135068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.001 17:54:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.001 [2024-10-25 17:54:12.355521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.001 "name": "raid_bdev1", 00:12:54.001 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:54.001 "strip_size_kb": 0, 00:12:54.001 "state": "online", 00:12:54.001 "raid_level": "raid1", 00:12:54.001 "superblock": true, 00:12:54.001 "num_base_bdevs": 2, 00:12:54.001 "num_base_bdevs_discovered": 2, 00:12:54.001 "num_base_bdevs_operational": 2, 00:12:54.001 "process": { 00:12:54.001 "type": "rebuild", 00:12:54.001 "target": "spare", 00:12:54.001 "progress": { 00:12:54.001 "blocks": 14336, 00:12:54.001 "percent": 22 00:12:54.001 } 00:12:54.001 }, 00:12:54.001 "base_bdevs_list": [ 00:12:54.001 { 00:12:54.001 "name": "spare", 00:12:54.001 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:54.001 "is_configured": true, 00:12:54.001 "data_offset": 2048, 00:12:54.001 "data_size": 63488 00:12:54.001 }, 00:12:54.001 { 00:12:54.001 "name": 
"BaseBdev2", 00:12:54.001 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:54.001 "is_configured": true, 00:12:54.001 "data_offset": 2048, 00:12:54.001 "data_size": 63488 00:12:54.001 } 00:12:54.001 ] 00:12:54.001 }' 00:12:54.001 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.271 [2024-10-25 17:54:12.477033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.271 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=417 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.271 17:54:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.271 "name": "raid_bdev1", 00:12:54.271 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:54.271 "strip_size_kb": 0, 00:12:54.271 "state": "online", 00:12:54.271 "raid_level": "raid1", 00:12:54.271 "superblock": true, 00:12:54.271 "num_base_bdevs": 2, 00:12:54.271 "num_base_bdevs_discovered": 2, 00:12:54.271 "num_base_bdevs_operational": 2, 00:12:54.271 "process": { 00:12:54.271 "type": "rebuild", 00:12:54.271 "target": "spare", 00:12:54.271 "progress": { 00:12:54.271 "blocks": 16384, 00:12:54.271 "percent": 25 00:12:54.271 } 00:12:54.271 }, 00:12:54.271 "base_bdevs_list": [ 00:12:54.271 { 00:12:54.271 "name": "spare", 00:12:54.271 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:54.271 "is_configured": true, 00:12:54.271 "data_offset": 2048, 00:12:54.271 "data_size": 63488 00:12:54.271 }, 00:12:54.271 { 00:12:54.271 "name": "BaseBdev2", 00:12:54.271 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:54.271 "is_configured": true, 00:12:54.271 "data_offset": 2048, 00:12:54.271 "data_size": 
63488 00:12:54.271 } 00:12:54.271 ] 00:12:54.271 }' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.271 17:54:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.531 [2024-10-25 17:54:12.860218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:54.789 126.00 IOPS, 378.00 MiB/s [2024-10-25T17:54:13.225Z] [2024-10-25 17:54:13.199069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:54.789 [2024-10-25 17:54:13.199698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:55.048 [2024-10-25 17:54:13.319249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.307 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.308 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.308 "name": "raid_bdev1", 00:12:55.308 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:55.308 "strip_size_kb": 0, 00:12:55.308 "state": "online", 00:12:55.308 "raid_level": "raid1", 00:12:55.308 "superblock": true, 00:12:55.308 "num_base_bdevs": 2, 00:12:55.308 "num_base_bdevs_discovered": 2, 00:12:55.308 "num_base_bdevs_operational": 2, 00:12:55.308 "process": { 00:12:55.308 "type": "rebuild", 00:12:55.308 "target": "spare", 00:12:55.308 "progress": { 00:12:55.308 "blocks": 30720, 00:12:55.308 "percent": 48 00:12:55.308 } 00:12:55.308 }, 00:12:55.308 "base_bdevs_list": [ 00:12:55.308 { 00:12:55.308 "name": "spare", 00:12:55.308 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:55.308 "is_configured": true, 00:12:55.308 "data_offset": 2048, 00:12:55.308 "data_size": 63488 00:12:55.308 }, 00:12:55.308 { 00:12:55.308 "name": "BaseBdev2", 00:12:55.308 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:55.308 "is_configured": true, 00:12:55.308 "data_offset": 2048, 00:12:55.308 "data_size": 63488 00:12:55.308 } 00:12:55.308 ] 00:12:55.308 }' 00:12:55.308 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.567 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:12:55.567 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.567 [2024-10-25 17:54:13.772814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:55.567 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.567 17:54:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.567 111.40 IOPS, 334.20 MiB/s [2024-10-25T17:54:14.003Z] [2024-10-25 17:54:13.991563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:55.567 [2024-10-25 17:54:13.992328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:55.826 [2024-10-25 17:54:14.105290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:56.085 [2024-10-25 17:54:14.349879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:56.085 [2024-10-25 17:54:14.467329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:56.085 [2024-10-25 17:54:14.467778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.653 "name": "raid_bdev1", 00:12:56.653 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:56.653 "strip_size_kb": 0, 00:12:56.653 "state": "online", 00:12:56.653 "raid_level": "raid1", 00:12:56.653 "superblock": true, 00:12:56.653 "num_base_bdevs": 2, 00:12:56.653 "num_base_bdevs_discovered": 2, 00:12:56.653 "num_base_bdevs_operational": 2, 00:12:56.653 "process": { 00:12:56.653 "type": "rebuild", 00:12:56.653 "target": "spare", 00:12:56.653 "progress": { 00:12:56.653 "blocks": 53248, 00:12:56.653 "percent": 83 00:12:56.653 } 00:12:56.653 }, 00:12:56.653 "base_bdevs_list": [ 00:12:56.653 { 00:12:56.653 "name": "spare", 00:12:56.653 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:56.653 "is_configured": true, 00:12:56.653 "data_offset": 2048, 00:12:56.653 "data_size": 63488 00:12:56.653 }, 00:12:56.653 { 00:12:56.653 "name": "BaseBdev2", 00:12:56.653 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:56.653 "is_configured": true, 00:12:56.653 "data_offset": 2048, 00:12:56.653 "data_size": 63488 00:12:56.653 } 00:12:56.653 ] 00:12:56.653 }' 00:12:56.653 17:54:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.653 101.00 IOPS, 303.00 MiB/s [2024-10-25T17:54:15.089Z] 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.653 17:54:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.911 [2024-10-25 17:54:15.319952] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.170 [2024-10-25 17:54:15.351195] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.170 [2024-10-25 17:54:15.354007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.737 92.86 IOPS, 278.57 MiB/s [2024-10-25T17:54:16.173Z] 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.737 17:54:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.737 17:54:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.737 "name": "raid_bdev1", 00:12:57.737 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:57.737 "strip_size_kb": 0, 00:12:57.737 "state": "online", 00:12:57.737 "raid_level": "raid1", 00:12:57.737 "superblock": true, 00:12:57.737 "num_base_bdevs": 2, 00:12:57.737 "num_base_bdevs_discovered": 2, 00:12:57.737 "num_base_bdevs_operational": 2, 00:12:57.737 "base_bdevs_list": [ 00:12:57.737 { 00:12:57.737 "name": "spare", 00:12:57.737 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:57.737 "is_configured": true, 00:12:57.737 "data_offset": 2048, 00:12:57.737 "data_size": 63488 00:12:57.737 }, 00:12:57.737 { 00:12:57.737 "name": "BaseBdev2", 00:12:57.737 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:57.737 "is_configured": true, 00:12:57.737 "data_offset": 2048, 00:12:57.737 "data_size": 63488 00:12:57.737 } 00:12:57.737 ] 00:12:57.737 }' 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.737 17:54:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.737 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.737 "name": "raid_bdev1", 00:12:57.737 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:57.737 "strip_size_kb": 0, 00:12:57.737 "state": "online", 00:12:57.737 "raid_level": "raid1", 00:12:57.737 "superblock": true, 00:12:57.737 "num_base_bdevs": 2, 00:12:57.737 "num_base_bdevs_discovered": 2, 00:12:57.737 "num_base_bdevs_operational": 2, 00:12:57.737 "base_bdevs_list": [ 00:12:57.737 { 00:12:57.737 "name": "spare", 00:12:57.737 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:57.737 "is_configured": true, 00:12:57.737 "data_offset": 2048, 00:12:57.737 "data_size": 63488 00:12:57.737 }, 00:12:57.737 { 00:12:57.737 "name": "BaseBdev2", 00:12:57.737 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:57.737 "is_configured": true, 00:12:57.737 "data_offset": 2048, 00:12:57.737 "data_size": 63488 00:12:57.737 } 00:12:57.737 ] 00:12:57.737 }' 00:12:57.737 17:54:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.996 "name": "raid_bdev1", 00:12:57.996 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:12:57.996 "strip_size_kb": 0, 00:12:57.996 "state": "online", 00:12:57.996 "raid_level": "raid1", 00:12:57.996 "superblock": true, 00:12:57.996 "num_base_bdevs": 2, 00:12:57.996 "num_base_bdevs_discovered": 2, 00:12:57.996 "num_base_bdevs_operational": 2, 00:12:57.996 "base_bdevs_list": [ 00:12:57.996 { 00:12:57.996 "name": "spare", 00:12:57.996 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:12:57.996 "is_configured": true, 00:12:57.996 "data_offset": 2048, 00:12:57.996 "data_size": 63488 00:12:57.996 }, 00:12:57.996 { 00:12:57.996 "name": "BaseBdev2", 00:12:57.996 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:12:57.996 "is_configured": true, 00:12:57.996 "data_offset": 2048, 00:12:57.996 "data_size": 63488 00:12:57.996 } 00:12:57.996 ] 00:12:57.996 }' 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.996 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.563 [2024-10-25 17:54:16.722450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.563 [2024-10-25 17:54:16.722604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.563 00:12:58.563 Latency(us) 00:12:58.563 [2024-10-25T17:54:16.999Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.563 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:58.563 raid_bdev1 : 7.93 85.36 256.09 0.00 0.00 15889.47 413.18 114931.26 00:12:58.563 [2024-10-25T17:54:16.999Z] =================================================================================================================== 00:12:58.563 [2024-10-25T17:54:16.999Z] Total : 85.36 256.09 0.00 0.00 15889.47 413.18 114931.26 00:12:58.563 [2024-10-25 17:54:16.838614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.563 { 00:12:58.563 "results": [ 00:12:58.563 { 00:12:58.563 "job": "raid_bdev1", 00:12:58.563 "core_mask": "0x1", 00:12:58.563 "workload": "randrw", 00:12:58.563 "percentage": 50, 00:12:58.563 "status": "finished", 00:12:58.563 "queue_depth": 2, 00:12:58.563 "io_size": 3145728, 00:12:58.563 "runtime": 7.930673, 00:12:58.563 "iops": 85.36476034253334, 00:12:58.563 "mibps": 256.0942810276, 00:12:58.563 "io_failed": 0, 00:12:58.563 "io_timeout": 0, 00:12:58.563 "avg_latency_us": 15889.472326536934, 00:12:58.563 "min_latency_us": 413.17729257641923, 00:12:58.563 "max_latency_us": 114931.2558951965 00:12:58.563 } 00:12:58.563 ], 00:12:58.563 "core_count": 1 00:12:58.563 } 00:12:58.563 [2024-10-25 17:54:16.838807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.563 [2024-10-25 17:54:16.838961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.563 [2024-10-25 17:54:16.838984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.563 17:54:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.563 17:54:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:58.822 /dev/nbd0 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.822 17:54:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.822 1+0 records in 00:12:58.822 1+0 records out 00:12:58.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389907 s, 10.5 MB/s 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.822 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:59.081 /dev/nbd1 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.081 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.082 1+0 records in 00:12:59.082 1+0 records out 00:12:59.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356469 s, 11.5 MB/s 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.082 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:59.340 
17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.340 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.599 
17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.599 17:54:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.858 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.858 
17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.116 [2024-10-25 17:54:18.295683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.116 [2024-10-25 17:54:18.295783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.116 [2024-10-25 17:54:18.295811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:00.116 [2024-10-25 17:54:18.295844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.116 [2024-10-25 17:54:18.298716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.116 [2024-10-25 17:54:18.298782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.116 [2024-10-25 17:54:18.298924] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:00.116 [2024-10-25 17:54:18.299032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.116 [2024-10-25 17:54:18.299249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.116 spare 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.116 [2024-10-25 17:54:18.399208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:00.116 [2024-10-25 17:54:18.399260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.116 [2024-10-25 17:54:18.399694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:13:00.116 [2024-10-25 17:54:18.399974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:00.116 [2024-10-25 17:54:18.400000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:00.116 [2024-10-25 17:54:18.400290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.116 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.116 "name": "raid_bdev1", 00:13:00.116 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:00.116 "strip_size_kb": 0, 00:13:00.116 "state": "online", 00:13:00.116 "raid_level": "raid1", 00:13:00.116 "superblock": true, 00:13:00.116 "num_base_bdevs": 2, 00:13:00.116 "num_base_bdevs_discovered": 2, 00:13:00.116 "num_base_bdevs_operational": 2, 00:13:00.116 "base_bdevs_list": [ 00:13:00.116 { 00:13:00.116 "name": "spare", 00:13:00.116 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:13:00.116 "is_configured": true, 00:13:00.116 "data_offset": 2048, 00:13:00.117 "data_size": 63488 00:13:00.117 }, 00:13:00.117 { 00:13:00.117 "name": "BaseBdev2", 00:13:00.117 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:00.117 "is_configured": true, 00:13:00.117 "data_offset": 2048, 00:13:00.117 "data_size": 63488 00:13:00.117 } 00:13:00.117 ] 00:13:00.117 }' 00:13:00.117 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.117 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.684 "name": "raid_bdev1", 00:13:00.684 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:00.684 "strip_size_kb": 0, 00:13:00.684 "state": "online", 00:13:00.684 "raid_level": "raid1", 00:13:00.684 "superblock": true, 00:13:00.684 "num_base_bdevs": 2, 00:13:00.684 "num_base_bdevs_discovered": 2, 00:13:00.684 "num_base_bdevs_operational": 2, 00:13:00.684 "base_bdevs_list": [ 00:13:00.684 { 00:13:00.684 "name": "spare", 00:13:00.684 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:13:00.684 "is_configured": true, 00:13:00.684 "data_offset": 2048, 00:13:00.684 "data_size": 63488 00:13:00.684 }, 00:13:00.684 { 00:13:00.684 "name": "BaseBdev2", 00:13:00.684 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:00.684 "is_configured": true, 00:13:00.684 "data_offset": 2048, 00:13:00.684 "data_size": 63488 00:13:00.684 } 00:13:00.684 ] 00:13:00.684 }' 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.684 17:54:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.684 [2024-10-25 17:54:19.096070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.684 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.943 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.943 "name": "raid_bdev1", 00:13:00.943 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:00.943 "strip_size_kb": 0, 00:13:00.943 "state": "online", 00:13:00.943 "raid_level": "raid1", 00:13:00.943 "superblock": true, 00:13:00.943 "num_base_bdevs": 2, 00:13:00.943 "num_base_bdevs_discovered": 1, 00:13:00.943 "num_base_bdevs_operational": 1, 00:13:00.943 "base_bdevs_list": [ 00:13:00.943 { 00:13:00.943 "name": null, 00:13:00.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.943 "is_configured": false, 00:13:00.943 "data_offset": 0, 00:13:00.943 "data_size": 63488 00:13:00.943 }, 00:13:00.943 { 00:13:00.943 "name": "BaseBdev2", 00:13:00.943 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:00.943 "is_configured": true, 00:13:00.943 "data_offset": 2048, 00:13:00.943 "data_size": 63488 00:13:00.943 } 00:13:00.943 ] 00:13:00.943 }' 00:13:00.943 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:00.943 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.202 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.202 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.202 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.202 [2024-10-25 17:54:19.572054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.202 [2024-10-25 17:54:19.572428] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.202 [2024-10-25 17:54:19.572455] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:01.202 [2024-10-25 17:54:19.572501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.202 [2024-10-25 17:54:19.592889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:01.202 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.202 17:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:01.202 [2024-10-25 17:54:19.595248] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.578 17:54:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.578 "name": "raid_bdev1", 00:13:02.578 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:02.578 "strip_size_kb": 0, 00:13:02.578 "state": "online", 00:13:02.578 "raid_level": "raid1", 00:13:02.578 "superblock": true, 00:13:02.578 "num_base_bdevs": 2, 00:13:02.578 "num_base_bdevs_discovered": 2, 00:13:02.578 "num_base_bdevs_operational": 2, 00:13:02.578 "process": { 00:13:02.578 "type": "rebuild", 00:13:02.578 "target": "spare", 00:13:02.578 "progress": { 00:13:02.578 "blocks": 20480, 00:13:02.578 "percent": 32 00:13:02.578 } 00:13:02.578 }, 00:13:02.578 "base_bdevs_list": [ 00:13:02.578 { 00:13:02.578 "name": "spare", 00:13:02.578 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:13:02.578 "is_configured": true, 00:13:02.578 "data_offset": 2048, 00:13:02.578 "data_size": 63488 00:13:02.578 }, 00:13:02.578 { 00:13:02.578 "name": "BaseBdev2", 00:13:02.578 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:02.578 "is_configured": true, 00:13:02.578 "data_offset": 2048, 00:13:02.578 "data_size": 63488 00:13:02.578 } 00:13:02.578 ] 00:13:02.578 }' 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.578 17:54:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 [2024-10-25 17:54:20.723055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.578 [2024-10-25 17:54:20.801927] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.578 [2024-10-25 17:54:20.802025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.578 [2024-10-25 17:54:20.802049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.578 [2024-10-25 17:54:20.802058] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.578 17:54:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.578 "name": "raid_bdev1", 00:13:02.578 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:02.578 "strip_size_kb": 0, 00:13:02.578 "state": "online", 00:13:02.578 "raid_level": "raid1", 00:13:02.578 "superblock": true, 00:13:02.578 "num_base_bdevs": 2, 00:13:02.578 "num_base_bdevs_discovered": 1, 00:13:02.578 "num_base_bdevs_operational": 1, 00:13:02.578 "base_bdevs_list": [ 00:13:02.578 { 00:13:02.578 "name": null, 00:13:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.578 "is_configured": false, 00:13:02.578 "data_offset": 0, 00:13:02.578 "data_size": 63488 00:13:02.578 }, 00:13:02.578 { 00:13:02.578 "name": "BaseBdev2", 00:13:02.578 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:02.578 "is_configured": true, 00:13:02.578 "data_offset": 2048, 00:13:02.578 
"data_size": 63488 00:13:02.578 } 00:13:02.578 ] 00:13:02.578 }' 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.578 17:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.205 17:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.205 17:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.205 17:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.205 [2024-10-25 17:54:21.320182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.205 [2024-10-25 17:54:21.320389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.205 [2024-10-25 17:54:21.320427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:03.205 [2024-10-25 17:54:21.320439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.205 [2024-10-25 17:54:21.321000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.205 [2024-10-25 17:54:21.321023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.205 [2024-10-25 17:54:21.321138] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:03.205 [2024-10-25 17:54:21.321163] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:03.205 [2024-10-25 17:54:21.321177] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:03.205 [2024-10-25 17:54:21.321214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.205 [2024-10-25 17:54:21.340411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:03.205 spare 00:13:03.205 17:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.205 [2024-10-25 17:54:21.342575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.205 17:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.142 "name": "raid_bdev1", 00:13:04.142 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:04.142 "strip_size_kb": 0, 00:13:04.142 
"state": "online", 00:13:04.142 "raid_level": "raid1", 00:13:04.142 "superblock": true, 00:13:04.142 "num_base_bdevs": 2, 00:13:04.142 "num_base_bdevs_discovered": 2, 00:13:04.142 "num_base_bdevs_operational": 2, 00:13:04.142 "process": { 00:13:04.142 "type": "rebuild", 00:13:04.142 "target": "spare", 00:13:04.142 "progress": { 00:13:04.142 "blocks": 20480, 00:13:04.142 "percent": 32 00:13:04.142 } 00:13:04.142 }, 00:13:04.142 "base_bdevs_list": [ 00:13:04.142 { 00:13:04.142 "name": "spare", 00:13:04.142 "uuid": "e7ded1af-926b-5e62-a2ef-83183b32757c", 00:13:04.142 "is_configured": true, 00:13:04.142 "data_offset": 2048, 00:13:04.142 "data_size": 63488 00:13:04.142 }, 00:13:04.142 { 00:13:04.142 "name": "BaseBdev2", 00:13:04.142 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:04.142 "is_configured": true, 00:13:04.142 "data_offset": 2048, 00:13:04.142 "data_size": 63488 00:13:04.142 } 00:13:04.142 ] 00:13:04.142 }' 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.142 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.142 [2024-10-25 17:54:22.490606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.142 [2024-10-25 17:54:22.548950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:04.142 [2024-10-25 17:54:22.549060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.142 [2024-10-25 17:54:22.549077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.142 [2024-10-25 17:54:22.549087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.403 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.403 "name": "raid_bdev1", 00:13:04.403 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:04.403 "strip_size_kb": 0, 00:13:04.403 "state": "online", 00:13:04.403 "raid_level": "raid1", 00:13:04.403 "superblock": true, 00:13:04.403 "num_base_bdevs": 2, 00:13:04.403 "num_base_bdevs_discovered": 1, 00:13:04.403 "num_base_bdevs_operational": 1, 00:13:04.403 "base_bdevs_list": [ 00:13:04.403 { 00:13:04.403 "name": null, 00:13:04.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.403 "is_configured": false, 00:13:04.403 "data_offset": 0, 00:13:04.403 "data_size": 63488 00:13:04.403 }, 00:13:04.404 { 00:13:04.404 "name": "BaseBdev2", 00:13:04.404 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:04.404 "is_configured": true, 00:13:04.404 "data_offset": 2048, 00:13:04.404 "data_size": 63488 00:13:04.404 } 00:13:04.404 ] 00:13:04.404 }' 00:13:04.404 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.404 17:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.974 "name": "raid_bdev1", 00:13:04.974 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:04.974 "strip_size_kb": 0, 00:13:04.974 "state": "online", 00:13:04.974 "raid_level": "raid1", 00:13:04.974 "superblock": true, 00:13:04.974 "num_base_bdevs": 2, 00:13:04.974 "num_base_bdevs_discovered": 1, 00:13:04.974 "num_base_bdevs_operational": 1, 00:13:04.974 "base_bdevs_list": [ 00:13:04.974 { 00:13:04.974 "name": null, 00:13:04.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.974 "is_configured": false, 00:13:04.974 "data_offset": 0, 00:13:04.974 "data_size": 63488 00:13:04.974 }, 00:13:04.974 { 00:13:04.974 "name": "BaseBdev2", 00:13:04.974 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:04.974 "is_configured": true, 00:13:04.974 "data_offset": 2048, 00:13:04.974 "data_size": 63488 00:13:04.974 } 00:13:04.974 ] 00:13:04.974 }' 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.974 [2024-10-25 17:54:23.266993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.974 [2024-10-25 17:54:23.267196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.974 [2024-10-25 17:54:23.267226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:04.974 [2024-10-25 17:54:23.267239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.974 [2024-10-25 17:54:23.267755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.974 [2024-10-25 17:54:23.267790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.974 [2024-10-25 17:54:23.267902] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:04.974 [2024-10-25 17:54:23.267934] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.974 [2024-10-25 17:54:23.267944] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:04.974 [2024-10-25 17:54:23.267959] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:04.974 BaseBdev1 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.974 17:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.913 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.913 "name": "raid_bdev1", 00:13:05.913 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:05.913 "strip_size_kb": 0, 00:13:05.913 "state": "online", 00:13:05.913 "raid_level": "raid1", 00:13:05.913 "superblock": true, 00:13:05.913 "num_base_bdevs": 2, 00:13:05.913 "num_base_bdevs_discovered": 1, 00:13:05.913 "num_base_bdevs_operational": 1, 00:13:05.913 "base_bdevs_list": [ 00:13:05.913 { 00:13:05.913 "name": null, 00:13:05.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.913 "is_configured": false, 00:13:05.913 "data_offset": 0, 00:13:05.913 "data_size": 63488 00:13:05.914 }, 00:13:05.914 { 00:13:05.914 "name": "BaseBdev2", 00:13:05.914 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:05.914 "is_configured": true, 00:13:05.914 "data_offset": 2048, 00:13:05.914 "data_size": 63488 00:13:05.914 } 00:13:05.914 ] 00:13:05.914 }' 00:13:05.914 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.914 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.481 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.482 "name": "raid_bdev1", 00:13:06.482 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:06.482 "strip_size_kb": 0, 00:13:06.482 "state": "online", 00:13:06.482 "raid_level": "raid1", 00:13:06.482 "superblock": true, 00:13:06.482 "num_base_bdevs": 2, 00:13:06.482 "num_base_bdevs_discovered": 1, 00:13:06.482 "num_base_bdevs_operational": 1, 00:13:06.482 "base_bdevs_list": [ 00:13:06.482 { 00:13:06.482 "name": null, 00:13:06.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.482 "is_configured": false, 00:13:06.482 "data_offset": 0, 00:13:06.482 "data_size": 63488 00:13:06.482 }, 00:13:06.482 { 00:13:06.482 "name": "BaseBdev2", 00:13:06.482 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:06.482 "is_configured": true, 00:13:06.482 "data_offset": 2048, 00:13:06.482 "data_size": 63488 00:13:06.482 } 00:13:06.482 ] 00:13:06.482 }' 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.482 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.482 [2024-10-25 17:54:24.917376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.482 [2024-10-25 17:54:24.917710] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:06.482 [2024-10-25 17:54:24.917776] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:06.741 request: 00:13:06.741 { 00:13:06.741 "base_bdev": "BaseBdev1", 00:13:06.741 "raid_bdev": "raid_bdev1", 00:13:06.741 "method": "bdev_raid_add_base_bdev", 00:13:06.741 "req_id": 1 00:13:06.741 } 00:13:06.741 Got JSON-RPC error response 00:13:06.741 response: 00:13:06.741 { 00:13:06.741 "code": -22, 00:13:06.741 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:06.741 } 00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.741 17:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.677 "name": "raid_bdev1", 00:13:07.677 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:07.677 "strip_size_kb": 0, 00:13:07.677 "state": "online", 00:13:07.677 "raid_level": "raid1", 00:13:07.677 "superblock": true, 00:13:07.677 "num_base_bdevs": 2, 00:13:07.677 "num_base_bdevs_discovered": 1, 00:13:07.677 "num_base_bdevs_operational": 1, 00:13:07.677 "base_bdevs_list": [ 00:13:07.677 { 00:13:07.677 "name": null, 00:13:07.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.677 "is_configured": false, 00:13:07.677 "data_offset": 0, 00:13:07.677 "data_size": 63488 00:13:07.677 }, 00:13:07.677 { 00:13:07.677 "name": "BaseBdev2", 00:13:07.677 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:07.677 "is_configured": true, 00:13:07.677 "data_offset": 2048, 00:13:07.677 "data_size": 63488 00:13:07.677 } 00:13:07.677 ] 00:13:07.677 }' 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.677 17:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.247 17:54:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.247 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.248 "name": "raid_bdev1", 00:13:08.248 "uuid": "c4ee7a59-a529-48ba-9ff9-720984cc8078", 00:13:08.248 "strip_size_kb": 0, 00:13:08.248 "state": "online", 00:13:08.248 "raid_level": "raid1", 00:13:08.248 "superblock": true, 00:13:08.248 "num_base_bdevs": 2, 00:13:08.248 "num_base_bdevs_discovered": 1, 00:13:08.248 "num_base_bdevs_operational": 1, 00:13:08.248 "base_bdevs_list": [ 00:13:08.248 { 00:13:08.248 "name": null, 00:13:08.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.248 "is_configured": false, 00:13:08.248 "data_offset": 0, 00:13:08.248 "data_size": 63488 00:13:08.248 }, 00:13:08.248 { 00:13:08.248 "name": "BaseBdev2", 00:13:08.248 "uuid": "d46fce50-a750-564f-896d-25421c2e570d", 00:13:08.248 "is_configured": true, 00:13:08.248 "data_offset": 2048, 00:13:08.248 "data_size": 63488 00:13:08.248 } 00:13:08.248 ] 00:13:08.248 }' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.248 17:54:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76656 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76656 ']' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76656 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76656 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76656' 00:13:08.248 killing process with pid 76656 00:13:08.248 Received shutdown signal, test time was about 17.725496 seconds 00:13:08.248 00:13:08.248 Latency(us) 00:13:08.248 [2024-10-25T17:54:26.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.248 [2024-10-25T17:54:26.684Z] =================================================================================================================== 00:13:08.248 [2024-10-25T17:54:26.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76656 00:13:08.248 [2024-10-25 17:54:26.586177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.248 17:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76656 00:13:08.248 [2024-10-25 17:54:26.586351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.248 [2024-10-25 17:54:26.586427] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.248 [2024-10-25 17:54:26.586440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:08.507 [2024-10-25 17:54:26.860802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:09.887 00:13:09.887 real 0m21.112s 00:13:09.887 user 0m27.822s 00:13:09.887 sys 0m2.299s 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.887 ************************************ 00:13:09.887 END TEST raid_rebuild_test_sb_io 00:13:09.887 ************************************ 00:13:09.887 17:54:28 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:09.887 17:54:28 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:09.887 17:54:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:09.887 17:54:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.887 17:54:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.887 ************************************ 00:13:09.887 START TEST raid_rebuild_test 00:13:09.887 ************************************ 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:09.887 17:54:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77358 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77358 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77358 ']' 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.887 17:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:09.887 Zero copy mechanism will not be used. 
00:13:09.887 [2024-10-25 17:54:28.269894] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:09.887 [2024-10-25 17:54:28.270089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77358 ] 00:13:10.147 [2024-10-25 17:54:28.443542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.147 [2024-10-25 17:54:28.563568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.406 [2024-10-25 17:54:28.764445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.406 [2024-10-25 17:54:28.764498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 BaseBdev1_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 
[2024-10-25 17:54:29.154863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:10.976 [2024-10-25 17:54:29.155041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.976 [2024-10-25 17:54:29.155092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.976 [2024-10-25 17:54:29.155131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.976 [2024-10-25 17:54:29.157436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.976 [2024-10-25 17:54:29.157533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.976 BaseBdev1 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 BaseBdev2_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 [2024-10-25 17:54:29.211587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:10.976 [2024-10-25 17:54:29.211772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:10.976 [2024-10-25 17:54:29.211819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.976 [2024-10-25 17:54:29.211884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.976 [2024-10-25 17:54:29.214178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.976 [2024-10-25 17:54:29.214270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.976 BaseBdev2 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 BaseBdev3_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 [2024-10-25 17:54:29.279168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:10.976 [2024-10-25 17:54:29.279354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.976 [2024-10-25 17:54:29.279397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:10.976 [2024-10-25 17:54:29.279436] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.976 [2024-10-25 17:54:29.281720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.976 BaseBdev3 00:13:10.976 [2024-10-25 17:54:29.281845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 BaseBdev4_malloc 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.976 [2024-10-25 17:54:29.334800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:10.976 [2024-10-25 17:54:29.334985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.976 [2024-10-25 17:54:29.335013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:10.977 [2024-10-25 17:54:29.335025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.977 [2024-10-25 17:54:29.337120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.977 [2024-10-25 17:54:29.337165] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:10.977 BaseBdev4 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.977 spare_malloc 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.977 spare_delay 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.977 [2024-10-25 17:54:29.401190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.977 [2024-10-25 17:54:29.401383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.977 [2024-10-25 17:54:29.401429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:10.977 [2024-10-25 17:54:29.401475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.977 [2024-10-25 
17:54:29.403690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.977 [2024-10-25 17:54:29.403797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.977 spare 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.977 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 [2024-10-25 17:54:29.413247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.237 [2024-10-25 17:54:29.415205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.237 [2024-10-25 17:54:29.415328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.237 [2024-10-25 17:54:29.415405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:11.237 [2024-10-25 17:54:29.415560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:11.237 [2024-10-25 17:54:29.415603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:11.237 [2024-10-25 17:54:29.415964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:11.237 [2024-10-25 17:54:29.416211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:11.237 [2024-10-25 17:54:29.416272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:11.237 [2024-10-25 17:54:29.416527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
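The `verify_raid_bdev_state` helper that runs next pulls a single bdev's record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal Python sketch of that same selection (the field values mirror the raid_bdev1 JSON printed in this log; the list contents are illustrative, not live RPC output):

```python
import json

# Shaped like `rpc.py bdev_raid_get_bdevs all` output; values copied from
# the raid_bdev1 record in this log (illustrative sample, not a live query).
raw = json.dumps([
    {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
     "strip_size_kb": 0, "num_base_bdevs": 4,
     "num_base_bdevs_discovered": 4, "num_base_bdevs_operational": 4},
])

def select_bdev(raw_json, name):
    # Python equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    return next((b for b in json.loads(raw_json) if b["name"] == name), None)

info = select_bdev(raw, "raid_bdev1")
print(info["state"], info["raid_level"], info["num_base_bdevs_operational"])
# online raid1 4
```

The test then asserts on individual fields of this record (state, raid level, strip size, discovered/operational counts), which is why the helper stores the whole selected object in `raid_bdev_info` rather than querying once per field.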
00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.237 "name": "raid_bdev1", 00:13:11.237 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:11.237 "strip_size_kb": 0, 00:13:11.237 "state": "online", 00:13:11.237 "raid_level": 
"raid1", 00:13:11.237 "superblock": false, 00:13:11.237 "num_base_bdevs": 4, 00:13:11.237 "num_base_bdevs_discovered": 4, 00:13:11.237 "num_base_bdevs_operational": 4, 00:13:11.237 "base_bdevs_list": [ 00:13:11.237 { 00:13:11.237 "name": "BaseBdev1", 00:13:11.237 "uuid": "9d14d1c9-d76a-5e2a-b91c-0bbf6c13bafc", 00:13:11.237 "is_configured": true, 00:13:11.237 "data_offset": 0, 00:13:11.237 "data_size": 65536 00:13:11.237 }, 00:13:11.237 { 00:13:11.237 "name": "BaseBdev2", 00:13:11.237 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:11.237 "is_configured": true, 00:13:11.237 "data_offset": 0, 00:13:11.237 "data_size": 65536 00:13:11.237 }, 00:13:11.237 { 00:13:11.237 "name": "BaseBdev3", 00:13:11.237 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:11.237 "is_configured": true, 00:13:11.237 "data_offset": 0, 00:13:11.237 "data_size": 65536 00:13:11.237 }, 00:13:11.237 { 00:13:11.237 "name": "BaseBdev4", 00:13:11.237 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:11.237 "is_configured": true, 00:13:11.237 "data_offset": 0, 00:13:11.237 "data_size": 65536 00:13:11.237 } 00:13:11.237 ] 00:13:11.237 }' 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.237 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 [2024-10-25 17:54:29.872890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 17:54:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.496 17:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.756 17:54:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:11.756 [2024-10-25 17:54:30.168171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:11.756 /dev/nbd0 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.014 1+0 records in 00:13:12.014 1+0 records out 00:13:12.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563048 s, 7.3 MB/s 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
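The two `dd` transfers around this point can be cross-checked arithmetically: the 1-block readiness probe above reads 4096 B in 0.000563048 s (reported as "7.3 MB/s"), and the full-device write that follows moves 65536 blocks of 512 B in 6.40527 s (reported as "33554432 bytes ... 5.2 MB/s"). A quick sketch confirming dd's decimal-megabyte rates:

```python
# dd reports throughput in decimal MB/s (10^6 bytes per second).
def dd_rate_mb_s(total_bytes, elapsed_s):
    return total_bytes / elapsed_s / 1_000_000

probe_rate = dd_rate_mb_s(4096, 0.000563048)    # bs=4096 count=1 probe read
write_rate = dd_rate_mb_s(512 * 65536, 6.40527)  # bs=512 count=65536 write

print(512 * 65536)           # 33554432 -- matches the byte count in the log
print(round(probe_rate, 1))  # 7.3
print(round(write_rate, 1))  # 5.2
```

The 65536-block count is not arbitrary: it matches `raid_bdev_size` read from `.[].num_blocks` earlier, so the write fills the whole raid1 device through /dev/nbd0 before the rebuild scenarios start.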
00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:12.014 17:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:18.591 65536+0 records in 00:13:18.591 65536+0 records out 00:13:18.591 33554432 bytes (34 MB, 32 MiB) copied, 6.40527 s, 5.2 MB/s 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.591 [2024-10-25 17:54:36.868384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.591 
17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.591 [2024-10-25 17:54:36.908455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.591 17:54:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.591 "name": "raid_bdev1", 00:13:18.591 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:18.591 "strip_size_kb": 0, 00:13:18.591 "state": "online", 00:13:18.591 "raid_level": "raid1", 00:13:18.591 "superblock": false, 00:13:18.591 "num_base_bdevs": 4, 00:13:18.591 "num_base_bdevs_discovered": 3, 00:13:18.591 "num_base_bdevs_operational": 3, 00:13:18.591 "base_bdevs_list": [ 00:13:18.591 { 00:13:18.591 "name": null, 00:13:18.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.591 "is_configured": false, 00:13:18.591 "data_offset": 0, 00:13:18.591 "data_size": 65536 00:13:18.591 }, 00:13:18.591 { 00:13:18.591 "name": "BaseBdev2", 00:13:18.591 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:18.591 "is_configured": true, 00:13:18.591 "data_offset": 0, 00:13:18.591 "data_size": 65536 00:13:18.591 }, 00:13:18.591 { 00:13:18.591 "name": "BaseBdev3", 00:13:18.591 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:18.591 "is_configured": true, 00:13:18.591 "data_offset": 0, 00:13:18.591 "data_size": 65536 00:13:18.591 }, 00:13:18.591 { 00:13:18.591 "name": "BaseBdev4", 00:13:18.591 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:18.591 
"is_configured": true, 00:13:18.591 "data_offset": 0, 00:13:18.591 "data_size": 65536 00:13:18.591 } 00:13:18.591 ] 00:13:18.591 }' 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.591 17:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.160 17:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.160 17:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.161 17:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.161 [2024-10-25 17:54:37.383821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.161 [2024-10-25 17:54:37.400944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:19.161 17:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.161 17:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:19.161 [2024-10-25 17:54:37.403429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.102 "name": "raid_bdev1", 00:13:20.102 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:20.102 "strip_size_kb": 0, 00:13:20.102 "state": "online", 00:13:20.102 "raid_level": "raid1", 00:13:20.102 "superblock": false, 00:13:20.102 "num_base_bdevs": 4, 00:13:20.102 "num_base_bdevs_discovered": 4, 00:13:20.102 "num_base_bdevs_operational": 4, 00:13:20.102 "process": { 00:13:20.102 "type": "rebuild", 00:13:20.102 "target": "spare", 00:13:20.102 "progress": { 00:13:20.102 "blocks": 20480, 00:13:20.102 "percent": 31 00:13:20.102 } 00:13:20.102 }, 00:13:20.102 "base_bdevs_list": [ 00:13:20.102 { 00:13:20.102 "name": "spare", 00:13:20.102 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:20.102 "is_configured": true, 00:13:20.102 "data_offset": 0, 00:13:20.102 "data_size": 65536 00:13:20.102 }, 00:13:20.102 { 00:13:20.102 "name": "BaseBdev2", 00:13:20.102 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:20.102 "is_configured": true, 00:13:20.102 "data_offset": 0, 00:13:20.102 "data_size": 65536 00:13:20.102 }, 00:13:20.102 { 00:13:20.102 "name": "BaseBdev3", 00:13:20.102 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:20.102 "is_configured": true, 00:13:20.102 "data_offset": 0, 00:13:20.102 "data_size": 65536 00:13:20.102 }, 00:13:20.102 { 00:13:20.102 "name": "BaseBdev4", 00:13:20.102 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:20.102 "is_configured": true, 00:13:20.102 "data_offset": 0, 00:13:20.102 "data_size": 65536 00:13:20.102 } 00:13:20.102 ] 00:13:20.102 }' 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.102 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.363 [2024-10-25 17:54:38.554394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.363 [2024-10-25 17:54:38.609394] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.363 [2024-10-25 17:54:38.609591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.363 [2024-10-25 17:54:38.609618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.363 [2024-10-25 17:54:38.609630] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
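The rebuild "progress" object earlier in this log reports `blocks: 20480` at `percent: 31` on the 65536-block raid1 bdev, and a later snapshot shows `blocks: 24576` at `percent: 37`. Both are consistent with the percentage being the block fraction truncated to an integer; this is an inference from the two data points in this log, sketched as:

```python
# Assumed relationship (inferred from the log's two progress snapshots):
# percent = floor(blocks_done / total_blocks * 100)
def rebuild_percent(blocks_done, total_blocks):
    return blocks_done * 100 // total_blocks  # integer (floor) percent

print(rebuild_percent(20480, 65536))  # 31  (exactly 31.25%)
print(rebuild_percent(24576, 65536))  # 37  (exactly 37.5%)
```

This is why the test polls `.process.progress` rather than waiting on a fixed block count: the percentage only moves in whole-number steps as the rebuild sweeps the device.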
00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.363 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.363 "name": "raid_bdev1", 00:13:20.363 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:20.363 "strip_size_kb": 0, 00:13:20.363 "state": "online", 00:13:20.363 "raid_level": "raid1", 00:13:20.363 "superblock": false, 00:13:20.363 "num_base_bdevs": 4, 00:13:20.363 "num_base_bdevs_discovered": 3, 00:13:20.363 "num_base_bdevs_operational": 3, 00:13:20.363 "base_bdevs_list": [ 00:13:20.363 { 00:13:20.363 "name": null, 00:13:20.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.363 "is_configured": false, 00:13:20.363 "data_offset": 0, 00:13:20.363 "data_size": 65536 00:13:20.363 }, 00:13:20.363 { 00:13:20.363 "name": "BaseBdev2", 00:13:20.363 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:20.363 "is_configured": true, 00:13:20.363 "data_offset": 0, 00:13:20.363 "data_size": 65536 00:13:20.363 }, 00:13:20.363 { 
00:13:20.363 "name": "BaseBdev3", 00:13:20.363 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:20.363 "is_configured": true, 00:13:20.363 "data_offset": 0, 00:13:20.363 "data_size": 65536 00:13:20.363 }, 00:13:20.363 { 00:13:20.363 "name": "BaseBdev4", 00:13:20.364 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:20.364 "is_configured": true, 00:13:20.364 "data_offset": 0, 00:13:20.364 "data_size": 65536 00:13:20.364 } 00:13:20.364 ] 00:13:20.364 }' 00:13:20.364 17:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.364 17:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.962 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.962 "name": "raid_bdev1", 00:13:20.962 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:20.962 "strip_size_kb": 0, 00:13:20.962 "state": "online", 
00:13:20.962 "raid_level": "raid1", 00:13:20.962 "superblock": false, 00:13:20.962 "num_base_bdevs": 4, 00:13:20.962 "num_base_bdevs_discovered": 3, 00:13:20.962 "num_base_bdevs_operational": 3, 00:13:20.962 "base_bdevs_list": [ 00:13:20.962 { 00:13:20.962 "name": null, 00:13:20.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.962 "is_configured": false, 00:13:20.962 "data_offset": 0, 00:13:20.962 "data_size": 65536 00:13:20.962 }, 00:13:20.962 { 00:13:20.962 "name": "BaseBdev2", 00:13:20.963 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:20.963 "is_configured": true, 00:13:20.963 "data_offset": 0, 00:13:20.963 "data_size": 65536 00:13:20.963 }, 00:13:20.963 { 00:13:20.963 "name": "BaseBdev3", 00:13:20.963 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:20.963 "is_configured": true, 00:13:20.963 "data_offset": 0, 00:13:20.963 "data_size": 65536 00:13:20.963 }, 00:13:20.963 { 00:13:20.963 "name": "BaseBdev4", 00:13:20.963 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:20.963 "is_configured": true, 00:13:20.963 "data_offset": 0, 00:13:20.963 "data_size": 65536 00:13:20.963 } 00:13:20.963 ] 00:13:20.963 }' 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.963 [2024-10-25 17:54:39.222601] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.963 [2024-10-25 17:54:39.238679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.963 17:54:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:20.963 [2024-10-25 17:54:39.241289] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.941 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.941 "name": "raid_bdev1", 00:13:21.941 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:21.941 "strip_size_kb": 0, 00:13:21.941 "state": "online", 00:13:21.941 "raid_level": "raid1", 00:13:21.941 "superblock": false, 00:13:21.941 "num_base_bdevs": 4, 00:13:21.941 
"num_base_bdevs_discovered": 4, 00:13:21.941 "num_base_bdevs_operational": 4, 00:13:21.941 "process": { 00:13:21.941 "type": "rebuild", 00:13:21.941 "target": "spare", 00:13:21.941 "progress": { 00:13:21.941 "blocks": 20480, 00:13:21.942 "percent": 31 00:13:21.942 } 00:13:21.942 }, 00:13:21.942 "base_bdevs_list": [ 00:13:21.942 { 00:13:21.942 "name": "spare", 00:13:21.942 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:21.942 "is_configured": true, 00:13:21.942 "data_offset": 0, 00:13:21.942 "data_size": 65536 00:13:21.942 }, 00:13:21.942 { 00:13:21.942 "name": "BaseBdev2", 00:13:21.942 "uuid": "fc23277e-2cdb-5dda-8616-c4e0b9e429f9", 00:13:21.942 "is_configured": true, 00:13:21.942 "data_offset": 0, 00:13:21.942 "data_size": 65536 00:13:21.942 }, 00:13:21.942 { 00:13:21.942 "name": "BaseBdev3", 00:13:21.942 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:21.942 "is_configured": true, 00:13:21.942 "data_offset": 0, 00:13:21.942 "data_size": 65536 00:13:21.942 }, 00:13:21.942 { 00:13:21.942 "name": "BaseBdev4", 00:13:21.942 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:21.942 "is_configured": true, 00:13:21.942 "data_offset": 0, 00:13:21.942 "data_size": 65536 00:13:21.942 } 00:13:21.942 ] 00:13:21.942 }' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.942 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.201 [2024-10-25 17:54:40.380491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.201 [2024-10-25 17:54:40.447238] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.201 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.202 17:54:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.202 "name": "raid_bdev1", 00:13:22.202 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:22.202 "strip_size_kb": 0, 00:13:22.202 "state": "online", 00:13:22.202 "raid_level": "raid1", 00:13:22.202 "superblock": false, 00:13:22.202 "num_base_bdevs": 4, 00:13:22.202 "num_base_bdevs_discovered": 3, 00:13:22.202 "num_base_bdevs_operational": 3, 00:13:22.202 "process": { 00:13:22.202 "type": "rebuild", 00:13:22.202 "target": "spare", 00:13:22.202 "progress": { 00:13:22.202 "blocks": 24576, 00:13:22.202 "percent": 37 00:13:22.202 } 00:13:22.202 }, 00:13:22.202 "base_bdevs_list": [ 00:13:22.202 { 00:13:22.202 "name": "spare", 00:13:22.202 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": null, 00:13:22.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.202 "is_configured": false, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": "BaseBdev3", 00:13:22.202 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": "BaseBdev4", 00:13:22.202 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 } 00:13:22.202 ] 00:13:22.202 }' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.202 "name": "raid_bdev1", 00:13:22.202 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:22.202 "strip_size_kb": 0, 00:13:22.202 "state": "online", 00:13:22.202 "raid_level": "raid1", 00:13:22.202 "superblock": false, 00:13:22.202 "num_base_bdevs": 4, 00:13:22.202 "num_base_bdevs_discovered": 3, 00:13:22.202 "num_base_bdevs_operational": 3, 00:13:22.202 "process": { 00:13:22.202 "type": "rebuild", 00:13:22.202 "target": "spare", 00:13:22.202 "progress": { 
00:13:22.202 "blocks": 26624, 00:13:22.202 "percent": 40 00:13:22.202 } 00:13:22.202 }, 00:13:22.202 "base_bdevs_list": [ 00:13:22.202 { 00:13:22.202 "name": "spare", 00:13:22.202 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": null, 00:13:22.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.202 "is_configured": false, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": "BaseBdev3", 00:13:22.202 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 }, 00:13:22.202 { 00:13:22.202 "name": "BaseBdev4", 00:13:22.202 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:22.202 "is_configured": true, 00:13:22.202 "data_offset": 0, 00:13:22.202 "data_size": 65536 00:13:22.202 } 00:13:22.202 ] 00:13:22.202 }' 00:13:22.202 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.462 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.462 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.462 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.462 17:54:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.399 "name": "raid_bdev1", 00:13:23.399 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:23.399 "strip_size_kb": 0, 00:13:23.399 "state": "online", 00:13:23.399 "raid_level": "raid1", 00:13:23.399 "superblock": false, 00:13:23.399 "num_base_bdevs": 4, 00:13:23.399 "num_base_bdevs_discovered": 3, 00:13:23.399 "num_base_bdevs_operational": 3, 00:13:23.399 "process": { 00:13:23.399 "type": "rebuild", 00:13:23.399 "target": "spare", 00:13:23.399 "progress": { 00:13:23.399 "blocks": 49152, 00:13:23.399 "percent": 75 00:13:23.399 } 00:13:23.399 }, 00:13:23.399 "base_bdevs_list": [ 00:13:23.399 { 00:13:23.399 "name": "spare", 00:13:23.399 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:23.399 "is_configured": true, 00:13:23.399 "data_offset": 0, 00:13:23.399 "data_size": 65536 00:13:23.399 }, 00:13:23.399 { 00:13:23.399 "name": null, 00:13:23.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.399 "is_configured": false, 00:13:23.399 "data_offset": 0, 00:13:23.399 "data_size": 65536 00:13:23.399 }, 00:13:23.399 { 00:13:23.399 "name": "BaseBdev3", 00:13:23.399 "uuid": 
"c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:23.399 "is_configured": true, 00:13:23.399 "data_offset": 0, 00:13:23.399 "data_size": 65536 00:13:23.399 }, 00:13:23.399 { 00:13:23.399 "name": "BaseBdev4", 00:13:23.399 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:23.399 "is_configured": true, 00:13:23.399 "data_offset": 0, 00:13:23.399 "data_size": 65536 00:13:23.399 } 00:13:23.399 ] 00:13:23.399 }' 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.399 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.658 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.658 17:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.226 [2024-10-25 17:54:42.457146] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:24.226 [2024-10-25 17:54:42.457366] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:24.226 [2024-10-25 17:54:42.457456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.485 17:54:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.485 17:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.746 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.746 "name": "raid_bdev1", 00:13:24.746 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:24.746 "strip_size_kb": 0, 00:13:24.746 "state": "online", 00:13:24.746 "raid_level": "raid1", 00:13:24.746 "superblock": false, 00:13:24.746 "num_base_bdevs": 4, 00:13:24.746 "num_base_bdevs_discovered": 3, 00:13:24.746 "num_base_bdevs_operational": 3, 00:13:24.746 "base_bdevs_list": [ 00:13:24.746 { 00:13:24.746 "name": "spare", 00:13:24.746 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": null, 00:13:24.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.746 "is_configured": false, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": "BaseBdev3", 00:13:24.746 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": "BaseBdev4", 00:13:24.746 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 } 00:13:24.746 ] 00:13:24.746 }' 00:13:24.746 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:24.746 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:24.746 17:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.746 "name": "raid_bdev1", 00:13:24.746 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:24.746 "strip_size_kb": 0, 00:13:24.746 "state": "online", 00:13:24.746 "raid_level": "raid1", 00:13:24.746 "superblock": false, 00:13:24.746 "num_base_bdevs": 4, 00:13:24.746 "num_base_bdevs_discovered": 3, 00:13:24.746 "num_base_bdevs_operational": 3, 00:13:24.746 
"base_bdevs_list": [ 00:13:24.746 { 00:13:24.746 "name": "spare", 00:13:24.746 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": null, 00:13:24.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.746 "is_configured": false, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": "BaseBdev3", 00:13:24.746 "uuid": "c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 }, 00:13:24.746 { 00:13:24.746 "name": "BaseBdev4", 00:13:24.746 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:24.746 "is_configured": true, 00:13:24.746 "data_offset": 0, 00:13:24.746 "data_size": 65536 00:13:24.746 } 00:13:24.746 ] 00:13:24.746 }' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.746 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.006 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.006 "name": "raid_bdev1", 00:13:25.006 "uuid": "167a256d-d8d9-4779-a615-a5809bcd37a9", 00:13:25.006 "strip_size_kb": 0, 00:13:25.006 "state": "online", 00:13:25.006 "raid_level": "raid1", 00:13:25.006 "superblock": false, 00:13:25.006 "num_base_bdevs": 4, 00:13:25.006 "num_base_bdevs_discovered": 3, 00:13:25.006 "num_base_bdevs_operational": 3, 00:13:25.006 "base_bdevs_list": [ 00:13:25.006 { 00:13:25.006 "name": "spare", 00:13:25.006 "uuid": "dd7c8426-d61e-5784-8997-2e2bbdaf42dc", 00:13:25.006 "is_configured": true, 00:13:25.006 "data_offset": 0, 00:13:25.006 "data_size": 65536 00:13:25.006 }, 00:13:25.006 { 00:13:25.006 "name": null, 00:13:25.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.006 "is_configured": false, 00:13:25.006 "data_offset": 0, 00:13:25.006 "data_size": 65536 00:13:25.006 }, 00:13:25.006 { 00:13:25.006 "name": "BaseBdev3", 00:13:25.006 "uuid": 
"c3cd6f86-6c29-5649-9f30-33edf3133133", 00:13:25.006 "is_configured": true, 00:13:25.006 "data_offset": 0, 00:13:25.006 "data_size": 65536 00:13:25.006 }, 00:13:25.006 { 00:13:25.006 "name": "BaseBdev4", 00:13:25.006 "uuid": "ac4f02c2-2950-54b0-9581-9369629946cf", 00:13:25.006 "is_configured": true, 00:13:25.006 "data_offset": 0, 00:13:25.006 "data_size": 65536 00:13:25.006 } 00:13:25.006 ] 00:13:25.006 }' 00:13:25.006 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.006 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.265 [2024-10-25 17:54:43.595169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.265 [2024-10-25 17:54:43.595223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.265 [2024-10-25 17:54:43.595328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.265 [2024-10-25 17:54:43.595426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.265 [2024-10-25 17:54:43.595438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:25.265 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:25.524 /dev/nbd0 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:25.524 17:54:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.524 1+0 records in 00:13:25.524 1+0 records out 00:13:25.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530277 s, 7.7 MB/s 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:25.524 17:54:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:25.784 /dev/nbd1 00:13:25.784 
17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.784 1+0 records in 00:13:25.784 1+0 records out 00:13:25.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294065 s, 13.9 MB/s 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:25.784 17:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.042 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.301 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:26.559 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.559 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.559 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.559 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77358 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77358 ']' 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77358 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77358 00:13:26.560 killing process with pid 77358 00:13:26.560 Received shutdown signal, test time was about 60.000000 seconds 00:13:26.560 00:13:26.560 Latency(us) 00:13:26.560 [2024-10-25T17:54:44.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.560 [2024-10-25T17:54:44.996Z] 
=================================================================================================================== 00:13:26.560 [2024-10-25T17:54:44.996Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77358' 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77358 00:13:26.560 17:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77358 00:13:26.560 [2024-10-25 17:54:44.974318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.129 [2024-10-25 17:54:45.491027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:28.505 00:13:28.505 real 0m18.569s 00:13:28.505 user 0m20.227s 00:13:28.505 sys 0m3.527s 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.505 ************************************ 00:13:28.505 END TEST raid_rebuild_test 00:13:28.505 ************************************ 00:13:28.505 17:54:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:28.505 17:54:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:28.505 17:54:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.505 17:54:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.505 ************************************ 00:13:28.505 START TEST raid_rebuild_test_sb 00:13:28.505 
************************************ 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:28.505 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77828 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77828 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77828 ']' 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:28.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.506 17:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.506 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.506 Zero copy mechanism will not be used. 00:13:28.506 [2024-10-25 17:54:46.891412] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:28.506 [2024-10-25 17:54:46.891555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77828 ] 00:13:28.764 [2024-10-25 17:54:47.070153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.764 [2024-10-25 17:54:47.193236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.024 [2024-10-25 17:54:47.413364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.024 [2024-10-25 17:54:47.413438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.592 
17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.592 BaseBdev1_malloc 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.592 [2024-10-25 17:54:47.824428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.592 [2024-10-25 17:54:47.824515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.592 [2024-10-25 17:54:47.824545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.592 [2024-10-25 17:54:47.824562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.592 [2024-10-25 17:54:47.826848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.592 [2024-10-25 17:54:47.826889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.592 BaseBdev1 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:29.592 BaseBdev2_malloc 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.592 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.593 [2024-10-25 17:54:47.882369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.593 [2024-10-25 17:54:47.882462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.593 [2024-10-25 17:54:47.882489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.593 [2024-10-25 17:54:47.882507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.593 [2024-10-25 17:54:47.884920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.593 [2024-10-25 17:54:47.884964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.593 BaseBdev2 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.593 BaseBdev3_malloc 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.593 [2024-10-25 17:54:47.951939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:29.593 [2024-10-25 17:54:47.952011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.593 [2024-10-25 17:54:47.952037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:29.593 [2024-10-25 17:54:47.952051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.593 [2024-10-25 17:54:47.954211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.593 [2024-10-25 17:54:47.954255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.593 BaseBdev3 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.593 17:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.593 BaseBdev4_malloc 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.593 [2024-10-25 17:54:48.009227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:29.593 [2024-10-25 17:54:48.009295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.593 [2024-10-25 17:54:48.009316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:29.593 [2024-10-25 17:54:48.009331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.593 [2024-10-25 17:54:48.011581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.593 [2024-10-25 17:54:48.011641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:29.593 BaseBdev4 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.593 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 spare_malloc 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 spare_delay 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 [2024-10-25 17:54:48.080372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.853 [2024-10-25 17:54:48.080451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.853 [2024-10-25 17:54:48.080478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:29.853 [2024-10-25 17:54:48.080493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.853 [2024-10-25 17:54:48.082696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.853 [2024-10-25 17:54:48.082737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.853 spare 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 [2024-10-25 17:54:48.092428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.853 [2024-10-25 17:54:48.094288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.853 [2024-10-25 17:54:48.094364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.853 [2024-10-25 17:54:48.094422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:29.853 [2024-10-25 17:54:48.094624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.853 [2024-10-25 17:54:48.094646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.853 [2024-10-25 17:54:48.094964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:29.853 [2024-10-25 17:54:48.095177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.853 [2024-10-25 17:54:48.095192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.853 [2024-10-25 17:54:48.095374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.853 "name": "raid_bdev1", 00:13:29.853 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:29.853 "strip_size_kb": 0, 00:13:29.853 "state": "online", 00:13:29.853 "raid_level": "raid1", 00:13:29.853 "superblock": true, 00:13:29.853 "num_base_bdevs": 4, 00:13:29.853 "num_base_bdevs_discovered": 4, 00:13:29.853 "num_base_bdevs_operational": 4, 00:13:29.853 "base_bdevs_list": [ 00:13:29.853 { 00:13:29.853 "name": "BaseBdev1", 00:13:29.853 "uuid": "7086cb77-a4a4-5e43-a927-a288407ddb6b", 00:13:29.853 "is_configured": true, 00:13:29.853 "data_offset": 2048, 00:13:29.853 "data_size": 63488 00:13:29.853 }, 00:13:29.853 { 00:13:29.853 "name": "BaseBdev2", 00:13:29.853 "uuid": "7c820adb-4025-592c-a822-b662e693aa8d", 00:13:29.853 "is_configured": true, 00:13:29.853 "data_offset": 2048, 00:13:29.853 "data_size": 63488 00:13:29.853 }, 00:13:29.853 { 00:13:29.853 "name": "BaseBdev3", 00:13:29.853 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:29.853 "is_configured": true, 00:13:29.853 "data_offset": 2048, 00:13:29.853 "data_size": 63488 00:13:29.853 }, 00:13:29.853 { 00:13:29.853 "name": "BaseBdev4", 00:13:29.853 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:29.853 "is_configured": true, 00:13:29.853 "data_offset": 2048, 00:13:29.853 "data_size": 63488 00:13:29.853 } 00:13:29.853 ] 00:13:29.853 }' 
00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.853 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.113 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.113 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.113 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 [2024-10-25 17:54:48.544064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.372 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:30.631 [2024-10-25 17:54:48.851206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.631 /dev/nbd0 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # break 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.631 1+0 records in 00:13:30.631 1+0 records out 00:13:30.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447376 s, 9.2 MB/s 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:30.631 17:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:37.257 63488+0 records in 00:13:37.257 63488+0 records out 00:13:37.257 32505856 bytes (33 MB, 31 MiB) copied, 6.1549 s, 5.3 MB/s 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.257 17:54:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.257 [2024-10-25 17:54:55.307439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.257 [2024-10-25 17:54:55.347468] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.257 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.258 "name": "raid_bdev1", 00:13:37.258 "uuid": 
"81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:37.258 "strip_size_kb": 0, 00:13:37.258 "state": "online", 00:13:37.258 "raid_level": "raid1", 00:13:37.258 "superblock": true, 00:13:37.258 "num_base_bdevs": 4, 00:13:37.258 "num_base_bdevs_discovered": 3, 00:13:37.258 "num_base_bdevs_operational": 3, 00:13:37.258 "base_bdevs_list": [ 00:13:37.258 { 00:13:37.258 "name": null, 00:13:37.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.258 "is_configured": false, 00:13:37.258 "data_offset": 0, 00:13:37.258 "data_size": 63488 00:13:37.258 }, 00:13:37.258 { 00:13:37.258 "name": "BaseBdev2", 00:13:37.258 "uuid": "7c820adb-4025-592c-a822-b662e693aa8d", 00:13:37.258 "is_configured": true, 00:13:37.258 "data_offset": 2048, 00:13:37.258 "data_size": 63488 00:13:37.258 }, 00:13:37.258 { 00:13:37.258 "name": "BaseBdev3", 00:13:37.258 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:37.258 "is_configured": true, 00:13:37.258 "data_offset": 2048, 00:13:37.258 "data_size": 63488 00:13:37.258 }, 00:13:37.258 { 00:13:37.258 "name": "BaseBdev4", 00:13:37.258 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:37.258 "is_configured": true, 00:13:37.258 "data_offset": 2048, 00:13:37.258 "data_size": 63488 00:13:37.258 } 00:13:37.258 ] 00:13:37.258 }' 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.258 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.517 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.517 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.517 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.517 [2024-10-25 17:54:55.842757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.517 [2024-10-25 17:54:55.859573] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:37.517 [2024-10-25 17:54:55.861861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.517 17:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.517 17:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.452 17:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.712 17:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.712 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.712 "name": "raid_bdev1", 00:13:38.712 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:38.712 "strip_size_kb": 0, 00:13:38.712 "state": "online", 00:13:38.712 "raid_level": "raid1", 00:13:38.712 "superblock": true, 00:13:38.712 "num_base_bdevs": 4, 00:13:38.712 "num_base_bdevs_discovered": 4, 00:13:38.712 "num_base_bdevs_operational": 4, 00:13:38.712 "process": { 00:13:38.712 "type": 
"rebuild", 00:13:38.712 "target": "spare", 00:13:38.712 "progress": { 00:13:38.712 "blocks": 20480, 00:13:38.712 "percent": 32 00:13:38.712 } 00:13:38.712 }, 00:13:38.712 "base_bdevs_list": [ 00:13:38.712 { 00:13:38.712 "name": "spare", 00:13:38.712 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev2", 00:13:38.712 "uuid": "7c820adb-4025-592c-a822-b662e693aa8d", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev3", 00:13:38.712 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev4", 00:13:38.712 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 } 00:13:38.712 ] 00:13:38.712 }' 00:13:38.712 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.712 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.712 17:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.712 [2024-10-25 17:54:57.009052] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.712 [2024-10-25 17:54:57.068176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.712 [2024-10-25 17:54:57.068274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.712 [2024-10-25 17:54:57.068294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.712 [2024-10-25 17:54:57.068317] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.712 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.712 "name": "raid_bdev1", 00:13:38.712 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:38.712 "strip_size_kb": 0, 00:13:38.712 "state": "online", 00:13:38.712 "raid_level": "raid1", 00:13:38.712 "superblock": true, 00:13:38.712 "num_base_bdevs": 4, 00:13:38.712 "num_base_bdevs_discovered": 3, 00:13:38.712 "num_base_bdevs_operational": 3, 00:13:38.712 "base_bdevs_list": [ 00:13:38.712 { 00:13:38.712 "name": null, 00:13:38.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.712 "is_configured": false, 00:13:38.712 "data_offset": 0, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev2", 00:13:38.712 "uuid": "7c820adb-4025-592c-a822-b662e693aa8d", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev3", 00:13:38.712 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:38.712 "is_configured": true, 00:13:38.712 "data_offset": 2048, 00:13:38.712 "data_size": 63488 00:13:38.712 }, 00:13:38.712 { 00:13:38.712 "name": "BaseBdev4", 00:13:38.713 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:38.713 "is_configured": true, 00:13:38.713 "data_offset": 2048, 00:13:38.713 "data_size": 63488 00:13:38.713 } 00:13:38.713 ] 00:13:38.713 }' 00:13:38.713 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.713 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 17:54:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.280 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.281 "name": "raid_bdev1", 00:13:39.281 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:39.281 "strip_size_kb": 0, 00:13:39.281 "state": "online", 00:13:39.281 "raid_level": "raid1", 00:13:39.281 "superblock": true, 00:13:39.281 "num_base_bdevs": 4, 00:13:39.281 "num_base_bdevs_discovered": 3, 00:13:39.281 "num_base_bdevs_operational": 3, 00:13:39.281 "base_bdevs_list": [ 00:13:39.281 { 00:13:39.281 "name": null, 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 63488 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev2", 00:13:39.281 "uuid": "7c820adb-4025-592c-a822-b662e693aa8d", 00:13:39.281 "is_configured": true, 00:13:39.281 "data_offset": 2048, 00:13:39.281 "data_size": 
63488 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev3", 00:13:39.281 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:39.281 "is_configured": true, 00:13:39.281 "data_offset": 2048, 00:13:39.281 "data_size": 63488 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev4", 00:13:39.281 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:39.281 "is_configured": true, 00:13:39.281 "data_offset": 2048, 00:13:39.281 "data_size": 63488 00:13:39.281 } 00:13:39.281 ] 00:13:39.281 }' 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.281 [2024-10-25 17:54:57.672665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.281 [2024-10-25 17:54:57.688645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:39.281 [2024-10-25 17:54:57.690915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.281 17:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.656 "name": "raid_bdev1", 00:13:40.656 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:40.656 "strip_size_kb": 0, 00:13:40.656 "state": "online", 00:13:40.656 "raid_level": "raid1", 00:13:40.656 "superblock": true, 00:13:40.656 "num_base_bdevs": 4, 00:13:40.656 "num_base_bdevs_discovered": 4, 00:13:40.656 "num_base_bdevs_operational": 4, 00:13:40.656 "process": { 00:13:40.656 "type": "rebuild", 00:13:40.656 "target": "spare", 00:13:40.656 "progress": { 00:13:40.656 "blocks": 20480, 00:13:40.656 "percent": 32 00:13:40.656 } 00:13:40.656 }, 00:13:40.656 "base_bdevs_list": [ 00:13:40.656 { 00:13:40.656 "name": "spare", 00:13:40.656 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": "BaseBdev2", 00:13:40.656 "uuid": 
"7c820adb-4025-592c-a822-b662e693aa8d", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": "BaseBdev3", 00:13:40.656 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": "BaseBdev4", 00:13:40.656 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 } 00:13:40.656 ] 00:13:40.656 }' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:40.656 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.656 17:54:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.656 [2024-10-25 17:54:58.854107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.656 [2024-10-25 17:54:58.997179] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.656 17:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.656 "name": "raid_bdev1", 00:13:40.656 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:40.656 "strip_size_kb": 0, 00:13:40.656 
"state": "online", 00:13:40.656 "raid_level": "raid1", 00:13:40.656 "superblock": true, 00:13:40.656 "num_base_bdevs": 4, 00:13:40.656 "num_base_bdevs_discovered": 3, 00:13:40.656 "num_base_bdevs_operational": 3, 00:13:40.656 "process": { 00:13:40.656 "type": "rebuild", 00:13:40.656 "target": "spare", 00:13:40.656 "progress": { 00:13:40.656 "blocks": 24576, 00:13:40.656 "percent": 38 00:13:40.656 } 00:13:40.656 }, 00:13:40.656 "base_bdevs_list": [ 00:13:40.656 { 00:13:40.656 "name": "spare", 00:13:40.656 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": null, 00:13:40.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.656 "is_configured": false, 00:13:40.656 "data_offset": 0, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": "BaseBdev3", 00:13:40.656 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 }, 00:13:40.656 { 00:13:40.656 "name": "BaseBdev4", 00:13:40.656 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:40.656 "is_configured": true, 00:13:40.656 "data_offset": 2048, 00:13:40.656 "data_size": 63488 00:13:40.656 } 00:13:40.656 ] 00:13:40.656 }' 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.656 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=464 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.915 "name": "raid_bdev1", 00:13:40.915 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:40.915 "strip_size_kb": 0, 00:13:40.915 "state": "online", 00:13:40.915 "raid_level": "raid1", 00:13:40.915 "superblock": true, 00:13:40.915 "num_base_bdevs": 4, 00:13:40.915 "num_base_bdevs_discovered": 3, 00:13:40.915 "num_base_bdevs_operational": 3, 00:13:40.915 "process": { 00:13:40.915 "type": "rebuild", 00:13:40.915 "target": "spare", 00:13:40.915 "progress": { 00:13:40.915 "blocks": 26624, 00:13:40.915 "percent": 41 00:13:40.915 } 00:13:40.915 }, 00:13:40.915 "base_bdevs_list": [ 00:13:40.915 { 00:13:40.915 "name": "spare", 00:13:40.915 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:40.915 "is_configured": 
true, 00:13:40.915 "data_offset": 2048, 00:13:40.915 "data_size": 63488 00:13:40.915 }, 00:13:40.915 { 00:13:40.915 "name": null, 00:13:40.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.915 "is_configured": false, 00:13:40.915 "data_offset": 0, 00:13:40.915 "data_size": 63488 00:13:40.915 }, 00:13:40.915 { 00:13:40.915 "name": "BaseBdev3", 00:13:40.915 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:40.915 "is_configured": true, 00:13:40.915 "data_offset": 2048, 00:13:40.915 "data_size": 63488 00:13:40.915 }, 00:13:40.915 { 00:13:40.915 "name": "BaseBdev4", 00:13:40.915 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:40.915 "is_configured": true, 00:13:40.915 "data_offset": 2048, 00:13:40.915 "data_size": 63488 00:13:40.915 } 00:13:40.915 ] 00:13:40.915 }' 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.915 17:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.850 17:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.108 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.108 "name": "raid_bdev1", 00:13:42.108 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:42.108 "strip_size_kb": 0, 00:13:42.108 "state": "online", 00:13:42.108 "raid_level": "raid1", 00:13:42.108 "superblock": true, 00:13:42.108 "num_base_bdevs": 4, 00:13:42.108 "num_base_bdevs_discovered": 3, 00:13:42.108 "num_base_bdevs_operational": 3, 00:13:42.109 "process": { 00:13:42.109 "type": "rebuild", 00:13:42.109 "target": "spare", 00:13:42.109 "progress": { 00:13:42.109 "blocks": 49152, 00:13:42.109 "percent": 77 00:13:42.109 } 00:13:42.109 }, 00:13:42.109 "base_bdevs_list": [ 00:13:42.109 { 00:13:42.109 "name": "spare", 00:13:42.109 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:42.109 "is_configured": true, 00:13:42.109 "data_offset": 2048, 00:13:42.109 "data_size": 63488 00:13:42.109 }, 00:13:42.109 { 00:13:42.109 "name": null, 00:13:42.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.109 "is_configured": false, 00:13:42.109 "data_offset": 0, 00:13:42.109 "data_size": 63488 00:13:42.109 }, 00:13:42.109 { 00:13:42.109 "name": "BaseBdev3", 00:13:42.109 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:42.109 "is_configured": true, 00:13:42.109 "data_offset": 2048, 00:13:42.109 "data_size": 63488 00:13:42.109 }, 00:13:42.109 { 00:13:42.109 "name": "BaseBdev4", 00:13:42.109 "uuid": 
"0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:42.109 "is_configured": true, 00:13:42.109 "data_offset": 2048, 00:13:42.109 "data_size": 63488 00:13:42.109 } 00:13:42.109 ] 00:13:42.109 }' 00:13:42.109 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.109 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.109 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.109 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.109 17:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.679 [2024-10-25 17:55:00.907270] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.679 [2024-10-25 17:55:00.907386] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.679 [2024-10-25 17:55:00.907559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.247 "name": "raid_bdev1", 00:13:43.247 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:43.247 "strip_size_kb": 0, 00:13:43.247 "state": "online", 00:13:43.247 "raid_level": "raid1", 00:13:43.247 "superblock": true, 00:13:43.247 "num_base_bdevs": 4, 00:13:43.247 "num_base_bdevs_discovered": 3, 00:13:43.247 "num_base_bdevs_operational": 3, 00:13:43.247 "base_bdevs_list": [ 00:13:43.247 { 00:13:43.247 "name": "spare", 00:13:43.247 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": null, 00:13:43.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.247 "is_configured": false, 00:13:43.247 "data_offset": 0, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": "BaseBdev3", 00:13:43.247 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": "BaseBdev4", 00:13:43.247 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 } 00:13:43.247 ] 00:13:43.247 }' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.247 
17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.247 "name": "raid_bdev1", 00:13:43.247 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:43.247 "strip_size_kb": 0, 00:13:43.247 "state": "online", 00:13:43.247 "raid_level": "raid1", 00:13:43.247 "superblock": true, 00:13:43.247 "num_base_bdevs": 4, 00:13:43.247 "num_base_bdevs_discovered": 3, 00:13:43.247 "num_base_bdevs_operational": 3, 00:13:43.247 "base_bdevs_list": [ 00:13:43.247 { 00:13:43.247 "name": "spare", 00:13:43.247 "uuid": 
"b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": null, 00:13:43.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.247 "is_configured": false, 00:13:43.247 "data_offset": 0, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": "BaseBdev3", 00:13:43.247 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 }, 00:13:43.247 { 00:13:43.247 "name": "BaseBdev4", 00:13:43.247 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:43.247 "is_configured": true, 00:13:43.247 "data_offset": 2048, 00:13:43.247 "data_size": 63488 00:13:43.247 } 00:13:43.247 ] 00:13:43.247 }' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.247 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.507 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.508 "name": "raid_bdev1", 00:13:43.508 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:43.508 "strip_size_kb": 0, 00:13:43.508 "state": "online", 00:13:43.508 "raid_level": "raid1", 00:13:43.508 "superblock": true, 00:13:43.508 "num_base_bdevs": 4, 00:13:43.508 "num_base_bdevs_discovered": 3, 00:13:43.508 "num_base_bdevs_operational": 3, 00:13:43.508 "base_bdevs_list": [ 00:13:43.508 { 00:13:43.508 "name": "spare", 00:13:43.508 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:43.508 "is_configured": true, 00:13:43.508 "data_offset": 2048, 00:13:43.508 "data_size": 63488 00:13:43.508 }, 00:13:43.508 { 00:13:43.508 "name": null, 00:13:43.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.508 "is_configured": false, 00:13:43.508 "data_offset": 0, 00:13:43.508 "data_size": 63488 00:13:43.508 }, 00:13:43.508 { 00:13:43.508 "name": "BaseBdev3", 00:13:43.508 "uuid": 
"fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:43.508 "is_configured": true, 00:13:43.508 "data_offset": 2048, 00:13:43.508 "data_size": 63488 00:13:43.508 }, 00:13:43.508 { 00:13:43.508 "name": "BaseBdev4", 00:13:43.508 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:43.508 "is_configured": true, 00:13:43.508 "data_offset": 2048, 00:13:43.508 "data_size": 63488 00:13:43.508 } 00:13:43.508 ] 00:13:43.508 }' 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.508 17:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 [2024-10-25 17:55:02.128471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.768 [2024-10-25 17:55:02.128524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.768 [2024-10-25 17:55:02.128634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.768 [2024-10-25 17:55:02.128774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.768 [2024-10-25 17:55:02.128801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.768 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:44.027 /dev/nbd0 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.027 17:55:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.027 1+0 records in 00:13:44.027 1+0 records out 00:13:44.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270503 s, 15.1 MB/s 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.027 17:55:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:44.286 /dev/nbd1 00:13:44.545 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.545 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.545 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:44.545 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.546 1+0 records in 00:13:44.546 1+0 records out 00:13:44.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479279 s, 8.5 MB/s 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.546 17:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:44.804 
17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.804 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.063 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.063 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.063 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:45.323 [2024-10-25 17:55:03.522521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.323 [2024-10-25 17:55:03.522616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.323 [2024-10-25 17:55:03.522643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:45.323 [2024-10-25 17:55:03.522655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.323 [2024-10-25 17:55:03.525352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.323 [2024-10-25 17:55:03.525406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.323 [2024-10-25 17:55:03.525529] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.323 [2024-10-25 17:55:03.525603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.323 [2024-10-25 17:55:03.525781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.323 [2024-10-25 17:55:03.525924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.323 spare 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.323 [2024-10-25 17:55:03.625865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:45.323 [2024-10-25 17:55:03.625918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.323 [2024-10-25 
17:55:03.626364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:45.323 [2024-10-25 17:55:03.626592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:45.323 [2024-10-25 17:55:03.626611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:45.323 [2024-10-25 17:55:03.626880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.323 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.323 "name": "raid_bdev1", 00:13:45.323 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:45.323 "strip_size_kb": 0, 00:13:45.323 "state": "online", 00:13:45.323 "raid_level": "raid1", 00:13:45.323 "superblock": true, 00:13:45.323 "num_base_bdevs": 4, 00:13:45.323 "num_base_bdevs_discovered": 3, 00:13:45.323 "num_base_bdevs_operational": 3, 00:13:45.323 "base_bdevs_list": [ 00:13:45.323 { 00:13:45.323 "name": "spare", 00:13:45.323 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:45.323 "is_configured": true, 00:13:45.323 "data_offset": 2048, 00:13:45.323 "data_size": 63488 00:13:45.323 }, 00:13:45.323 { 00:13:45.323 "name": null, 00:13:45.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.323 "is_configured": false, 00:13:45.324 "data_offset": 2048, 00:13:45.324 "data_size": 63488 00:13:45.324 }, 00:13:45.324 { 00:13:45.324 "name": "BaseBdev3", 00:13:45.324 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:45.324 "is_configured": true, 00:13:45.324 "data_offset": 2048, 00:13:45.324 "data_size": 63488 00:13:45.324 }, 00:13:45.324 { 00:13:45.324 "name": "BaseBdev4", 00:13:45.324 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:45.324 "is_configured": true, 00:13:45.324 "data_offset": 2048, 00:13:45.324 "data_size": 63488 00:13:45.324 } 00:13:45.324 ] 00:13:45.324 }' 00:13:45.324 17:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.324 17:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.893 "name": "raid_bdev1", 00:13:45.893 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:45.893 "strip_size_kb": 0, 00:13:45.893 "state": "online", 00:13:45.893 "raid_level": "raid1", 00:13:45.893 "superblock": true, 00:13:45.893 "num_base_bdevs": 4, 00:13:45.893 "num_base_bdevs_discovered": 3, 00:13:45.893 "num_base_bdevs_operational": 3, 00:13:45.893 "base_bdevs_list": [ 00:13:45.893 { 00:13:45.893 "name": "spare", 00:13:45.893 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:45.893 "is_configured": true, 00:13:45.893 "data_offset": 2048, 00:13:45.893 "data_size": 63488 00:13:45.893 }, 00:13:45.893 { 00:13:45.893 "name": null, 00:13:45.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.893 "is_configured": false, 00:13:45.893 "data_offset": 2048, 00:13:45.893 "data_size": 63488 00:13:45.893 }, 00:13:45.893 { 00:13:45.893 "name": "BaseBdev3", 00:13:45.893 
"uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:45.893 "is_configured": true, 00:13:45.893 "data_offset": 2048, 00:13:45.893 "data_size": 63488 00:13:45.893 }, 00:13:45.893 { 00:13:45.893 "name": "BaseBdev4", 00:13:45.893 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:45.893 "is_configured": true, 00:13:45.893 "data_offset": 2048, 00:13:45.893 "data_size": 63488 00:13:45.893 } 00:13:45.893 ] 00:13:45.893 }' 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.893 [2024-10-25 17:55:04.313722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.893 17:55:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.893 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.153 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.153 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.153 "name": "raid_bdev1", 00:13:46.153 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:46.153 "strip_size_kb": 0, 00:13:46.153 "state": "online", 
00:13:46.153 "raid_level": "raid1", 00:13:46.153 "superblock": true, 00:13:46.153 "num_base_bdevs": 4, 00:13:46.153 "num_base_bdevs_discovered": 2, 00:13:46.153 "num_base_bdevs_operational": 2, 00:13:46.153 "base_bdevs_list": [ 00:13:46.153 { 00:13:46.153 "name": null, 00:13:46.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.153 "is_configured": false, 00:13:46.153 "data_offset": 0, 00:13:46.153 "data_size": 63488 00:13:46.153 }, 00:13:46.153 { 00:13:46.153 "name": null, 00:13:46.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.153 "is_configured": false, 00:13:46.153 "data_offset": 2048, 00:13:46.153 "data_size": 63488 00:13:46.153 }, 00:13:46.153 { 00:13:46.153 "name": "BaseBdev3", 00:13:46.153 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:46.153 "is_configured": true, 00:13:46.153 "data_offset": 2048, 00:13:46.153 "data_size": 63488 00:13:46.153 }, 00:13:46.153 { 00:13:46.153 "name": "BaseBdev4", 00:13:46.153 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:46.153 "is_configured": true, 00:13:46.153 "data_offset": 2048, 00:13:46.153 "data_size": 63488 00:13:46.153 } 00:13:46.153 ] 00:13:46.153 }' 00:13:46.153 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.153 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.412 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.412 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.412 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.412 [2024-10-25 17:55:04.784998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.412 [2024-10-25 17:55:04.785314] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:46.412 [2024-10-25 17:55:04.785382] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:46.412 [2024-10-25 17:55:04.785440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.412 [2024-10-25 17:55:04.802950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:46.412 17:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.412 17:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:46.412 [2024-10-25 17:55:04.805263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.446 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.446 "name": "raid_bdev1", 00:13:47.446 "uuid": 
"81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:47.446 "strip_size_kb": 0, 00:13:47.446 "state": "online", 00:13:47.446 "raid_level": "raid1", 00:13:47.446 "superblock": true, 00:13:47.446 "num_base_bdevs": 4, 00:13:47.446 "num_base_bdevs_discovered": 3, 00:13:47.447 "num_base_bdevs_operational": 3, 00:13:47.447 "process": { 00:13:47.447 "type": "rebuild", 00:13:47.447 "target": "spare", 00:13:47.447 "progress": { 00:13:47.447 "blocks": 20480, 00:13:47.447 "percent": 32 00:13:47.447 } 00:13:47.447 }, 00:13:47.447 "base_bdevs_list": [ 00:13:47.447 { 00:13:47.447 "name": "spare", 00:13:47.447 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:47.447 "is_configured": true, 00:13:47.447 "data_offset": 2048, 00:13:47.447 "data_size": 63488 00:13:47.447 }, 00:13:47.447 { 00:13:47.447 "name": null, 00:13:47.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.447 "is_configured": false, 00:13:47.447 "data_offset": 2048, 00:13:47.447 "data_size": 63488 00:13:47.447 }, 00:13:47.447 { 00:13:47.447 "name": "BaseBdev3", 00:13:47.447 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:47.447 "is_configured": true, 00:13:47.447 "data_offset": 2048, 00:13:47.447 "data_size": 63488 00:13:47.447 }, 00:13:47.447 { 00:13:47.447 "name": "BaseBdev4", 00:13:47.447 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:47.447 "is_configured": true, 00:13:47.447 "data_offset": 2048, 00:13:47.447 "data_size": 63488 00:13:47.447 } 00:13:47.447 ] 00:13:47.447 }' 00:13:47.447 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.705 17:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 [2024-10-25 17:55:05.960701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.705 [2024-10-25 17:55:06.011768] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.705 [2024-10-25 17:55:06.012012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.705 [2024-10-25 17:55:06.012090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.705 [2024-10-25 17:55:06.012110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.705 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.706 "name": "raid_bdev1", 00:13:47.706 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:47.706 "strip_size_kb": 0, 00:13:47.706 "state": "online", 00:13:47.706 "raid_level": "raid1", 00:13:47.706 "superblock": true, 00:13:47.706 "num_base_bdevs": 4, 00:13:47.706 "num_base_bdevs_discovered": 2, 00:13:47.706 "num_base_bdevs_operational": 2, 00:13:47.706 "base_bdevs_list": [ 00:13:47.706 { 00:13:47.706 "name": null, 00:13:47.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.706 "is_configured": false, 00:13:47.706 "data_offset": 0, 00:13:47.706 "data_size": 63488 00:13:47.706 }, 00:13:47.706 { 00:13:47.706 "name": null, 00:13:47.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.706 "is_configured": false, 00:13:47.706 "data_offset": 2048, 00:13:47.706 "data_size": 63488 00:13:47.706 }, 00:13:47.706 { 00:13:47.706 "name": "BaseBdev3", 00:13:47.706 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:47.706 "is_configured": true, 00:13:47.706 "data_offset": 2048, 00:13:47.706 "data_size": 63488 00:13:47.706 }, 00:13:47.706 { 00:13:47.706 "name": "BaseBdev4", 00:13:47.706 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:47.706 "is_configured": true, 00:13:47.706 
"data_offset": 2048, 00:13:47.706 "data_size": 63488 00:13:47.706 } 00:13:47.706 ] 00:13:47.706 }' 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.706 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.271 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.271 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 [2024-10-25 17:55:06.518564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.271 [2024-10-25 17:55:06.518747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.271 [2024-10-25 17:55:06.518801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:48.271 [2024-10-25 17:55:06.518817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.271 [2024-10-25 17:55:06.519502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.271 [2024-10-25 17:55:06.519543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.271 [2024-10-25 17:55:06.519705] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.271 [2024-10-25 17:55:06.519728] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:48.271 [2024-10-25 17:55:06.519751] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:48.271 [2024-10-25 17:55:06.519810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.271 spare 00:13:48.271 [2024-10-25 17:55:06.538081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:48.271 17:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.272 17:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:48.272 [2024-10-25 17:55:06.540484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.202 "name": "raid_bdev1", 00:13:49.202 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:49.202 "strip_size_kb": 0, 00:13:49.202 "state": "online", 00:13:49.202 
"raid_level": "raid1", 00:13:49.202 "superblock": true, 00:13:49.202 "num_base_bdevs": 4, 00:13:49.202 "num_base_bdevs_discovered": 3, 00:13:49.202 "num_base_bdevs_operational": 3, 00:13:49.202 "process": { 00:13:49.202 "type": "rebuild", 00:13:49.202 "target": "spare", 00:13:49.202 "progress": { 00:13:49.202 "blocks": 20480, 00:13:49.202 "percent": 32 00:13:49.202 } 00:13:49.202 }, 00:13:49.202 "base_bdevs_list": [ 00:13:49.202 { 00:13:49.202 "name": "spare", 00:13:49.202 "uuid": "b821beb3-3a07-5d07-9ea6-4fc51205fab7", 00:13:49.202 "is_configured": true, 00:13:49.202 "data_offset": 2048, 00:13:49.202 "data_size": 63488 00:13:49.202 }, 00:13:49.202 { 00:13:49.202 "name": null, 00:13:49.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.202 "is_configured": false, 00:13:49.202 "data_offset": 2048, 00:13:49.202 "data_size": 63488 00:13:49.202 }, 00:13:49.202 { 00:13:49.202 "name": "BaseBdev3", 00:13:49.202 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:49.202 "is_configured": true, 00:13:49.202 "data_offset": 2048, 00:13:49.202 "data_size": 63488 00:13:49.202 }, 00:13:49.202 { 00:13:49.202 "name": "BaseBdev4", 00:13:49.202 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:49.202 "is_configured": true, 00:13:49.202 "data_offset": 2048, 00:13:49.202 "data_size": 63488 00:13:49.202 } 00:13:49.202 ] 00:13:49.202 }' 00:13:49.202 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 [2024-10-25 17:55:07.704603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.460 [2024-10-25 17:55:07.747335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.460 [2024-10-25 17:55:07.747559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.460 [2024-10-25 17:55:07.747620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.460 [2024-10-25 17:55:07.747660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.460 
17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.460 "name": "raid_bdev1", 00:13:49.460 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:49.460 "strip_size_kb": 0, 00:13:49.460 "state": "online", 00:13:49.460 "raid_level": "raid1", 00:13:49.460 "superblock": true, 00:13:49.460 "num_base_bdevs": 4, 00:13:49.460 "num_base_bdevs_discovered": 2, 00:13:49.460 "num_base_bdevs_operational": 2, 00:13:49.460 "base_bdevs_list": [ 00:13:49.460 { 00:13:49.460 "name": null, 00:13:49.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.460 "is_configured": false, 00:13:49.460 "data_offset": 0, 00:13:49.460 "data_size": 63488 00:13:49.460 }, 00:13:49.460 { 00:13:49.460 "name": null, 00:13:49.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.460 "is_configured": false, 00:13:49.460 "data_offset": 2048, 00:13:49.460 "data_size": 63488 00:13:49.460 }, 00:13:49.460 { 00:13:49.460 "name": "BaseBdev3", 00:13:49.460 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:49.460 "is_configured": true, 00:13:49.460 "data_offset": 2048, 00:13:49.460 "data_size": 63488 00:13:49.460 }, 00:13:49.460 { 00:13:49.460 "name": "BaseBdev4", 00:13:49.460 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:49.460 "is_configured": true, 00:13:49.460 "data_offset": 2048, 00:13:49.460 "data_size": 63488 00:13:49.460 } 00:13:49.460 ] 00:13:49.460 }' 00:13:49.460 17:55:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.460 17:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.027 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.028 "name": "raid_bdev1", 00:13:50.028 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:50.028 "strip_size_kb": 0, 00:13:50.028 "state": "online", 00:13:50.028 "raid_level": "raid1", 00:13:50.028 "superblock": true, 00:13:50.028 "num_base_bdevs": 4, 00:13:50.028 "num_base_bdevs_discovered": 2, 00:13:50.028 "num_base_bdevs_operational": 2, 00:13:50.028 "base_bdevs_list": [ 00:13:50.028 { 00:13:50.028 "name": null, 00:13:50.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.028 "is_configured": false, 00:13:50.028 "data_offset": 0, 00:13:50.028 "data_size": 63488 00:13:50.028 }, 00:13:50.028 
{ 00:13:50.028 "name": null, 00:13:50.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.028 "is_configured": false, 00:13:50.028 "data_offset": 2048, 00:13:50.028 "data_size": 63488 00:13:50.028 }, 00:13:50.028 { 00:13:50.028 "name": "BaseBdev3", 00:13:50.028 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:50.028 "is_configured": true, 00:13:50.028 "data_offset": 2048, 00:13:50.028 "data_size": 63488 00:13:50.028 }, 00:13:50.028 { 00:13:50.028 "name": "BaseBdev4", 00:13:50.028 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:50.028 "is_configured": true, 00:13:50.028 "data_offset": 2048, 00:13:50.028 "data_size": 63488 00:13:50.028 } 00:13:50.028 ] 00:13:50.028 }' 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.028 [2024-10-25 17:55:08.367376] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.028 [2024-10-25 17:55:08.367469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.028 [2024-10-25 17:55:08.367494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:50.028 [2024-10-25 17:55:08.367507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.028 [2024-10-25 17:55:08.368061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.028 [2024-10-25 17:55:08.368109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.028 [2024-10-25 17:55:08.368237] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:50.028 [2024-10-25 17:55:08.368262] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:50.028 [2024-10-25 17:55:08.368272] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:50.028 [2024-10-25 17:55:08.368332] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:50.028 BaseBdev1 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.028 17:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:50.964 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.964 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.964 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.964 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.964 17:55:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.965 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.224 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.224 "name": "raid_bdev1", 00:13:51.224 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:51.224 "strip_size_kb": 0, 00:13:51.224 "state": "online", 00:13:51.224 "raid_level": "raid1", 00:13:51.224 "superblock": true, 00:13:51.224 "num_base_bdevs": 4, 00:13:51.224 "num_base_bdevs_discovered": 2, 00:13:51.224 "num_base_bdevs_operational": 2, 00:13:51.224 "base_bdevs_list": [ 00:13:51.224 { 00:13:51.224 "name": null, 00:13:51.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.224 "is_configured": false, 00:13:51.224 "data_offset": 0, 00:13:51.224 "data_size": 63488 00:13:51.224 }, 00:13:51.224 { 00:13:51.224 "name": null, 00:13:51.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.224 
"is_configured": false, 00:13:51.224 "data_offset": 2048, 00:13:51.224 "data_size": 63488 00:13:51.224 }, 00:13:51.224 { 00:13:51.224 "name": "BaseBdev3", 00:13:51.224 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:51.224 "is_configured": true, 00:13:51.224 "data_offset": 2048, 00:13:51.224 "data_size": 63488 00:13:51.224 }, 00:13:51.224 { 00:13:51.224 "name": "BaseBdev4", 00:13:51.224 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:51.224 "is_configured": true, 00:13:51.224 "data_offset": 2048, 00:13:51.224 "data_size": 63488 00:13:51.224 } 00:13:51.224 ] 00:13:51.224 }' 00:13:51.224 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.224 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.483 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.483 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.483 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.483 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:51.484 "name": "raid_bdev1", 00:13:51.484 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:51.484 "strip_size_kb": 0, 00:13:51.484 "state": "online", 00:13:51.484 "raid_level": "raid1", 00:13:51.484 "superblock": true, 00:13:51.484 "num_base_bdevs": 4, 00:13:51.484 "num_base_bdevs_discovered": 2, 00:13:51.484 "num_base_bdevs_operational": 2, 00:13:51.484 "base_bdevs_list": [ 00:13:51.484 { 00:13:51.484 "name": null, 00:13:51.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.484 "is_configured": false, 00:13:51.484 "data_offset": 0, 00:13:51.484 "data_size": 63488 00:13:51.484 }, 00:13:51.484 { 00:13:51.484 "name": null, 00:13:51.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.484 "is_configured": false, 00:13:51.484 "data_offset": 2048, 00:13:51.484 "data_size": 63488 00:13:51.484 }, 00:13:51.484 { 00:13:51.484 "name": "BaseBdev3", 00:13:51.484 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:51.484 "is_configured": true, 00:13:51.484 "data_offset": 2048, 00:13:51.484 "data_size": 63488 00:13:51.484 }, 00:13:51.484 { 00:13:51.484 "name": "BaseBdev4", 00:13:51.484 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:51.484 "is_configured": true, 00:13:51.484 "data_offset": 2048, 00:13:51.484 "data_size": 63488 00:13:51.484 } 00:13:51.484 ] 00:13:51.484 }' 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.484 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.743 [2024-10-25 17:55:09.968823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.743 [2024-10-25 17:55:09.969190] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:51.743 [2024-10-25 17:55:09.969270] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:51.743 request: 00:13:51.743 { 00:13:51.743 "base_bdev": "BaseBdev1", 00:13:51.743 "raid_bdev": "raid_bdev1", 00:13:51.743 "method": "bdev_raid_add_base_bdev", 00:13:51.743 "req_id": 1 00:13:51.743 } 00:13:51.743 Got JSON-RPC error response 00:13:51.743 response: 00:13:51.743 { 00:13:51.743 "code": -22, 00:13:51.743 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:51.743 } 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.743 17:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.679 17:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:52.679 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.679 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.679 "name": "raid_bdev1", 00:13:52.679 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:52.679 "strip_size_kb": 0, 00:13:52.679 "state": "online", 00:13:52.679 "raid_level": "raid1", 00:13:52.679 "superblock": true, 00:13:52.679 "num_base_bdevs": 4, 00:13:52.679 "num_base_bdevs_discovered": 2, 00:13:52.679 "num_base_bdevs_operational": 2, 00:13:52.679 "base_bdevs_list": [ 00:13:52.679 { 00:13:52.679 "name": null, 00:13:52.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.679 "is_configured": false, 00:13:52.679 "data_offset": 0, 00:13:52.679 "data_size": 63488 00:13:52.679 }, 00:13:52.679 { 00:13:52.679 "name": null, 00:13:52.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.679 "is_configured": false, 00:13:52.679 "data_offset": 2048, 00:13:52.679 "data_size": 63488 00:13:52.679 }, 00:13:52.679 { 00:13:52.679 "name": "BaseBdev3", 00:13:52.679 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:52.679 "is_configured": true, 00:13:52.679 "data_offset": 2048, 00:13:52.679 "data_size": 63488 00:13:52.679 }, 00:13:52.679 { 00:13:52.679 "name": "BaseBdev4", 00:13:52.679 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:52.679 "is_configured": true, 00:13:52.679 "data_offset": 2048, 00:13:52.679 "data_size": 63488 00:13:52.679 } 00:13:52.679 ] 00:13:52.679 }' 00:13:52.679 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.679 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.245 17:55:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.245 "name": "raid_bdev1", 00:13:53.245 "uuid": "81cc650d-13dc-4a9c-88fb-c6ce8560bc23", 00:13:53.245 "strip_size_kb": 0, 00:13:53.245 "state": "online", 00:13:53.245 "raid_level": "raid1", 00:13:53.245 "superblock": true, 00:13:53.245 "num_base_bdevs": 4, 00:13:53.245 "num_base_bdevs_discovered": 2, 00:13:53.245 "num_base_bdevs_operational": 2, 00:13:53.245 "base_bdevs_list": [ 00:13:53.245 { 00:13:53.245 "name": null, 00:13:53.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.245 "is_configured": false, 00:13:53.245 "data_offset": 0, 00:13:53.245 "data_size": 63488 00:13:53.245 }, 00:13:53.245 { 00:13:53.245 "name": null, 00:13:53.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.245 "is_configured": false, 00:13:53.245 "data_offset": 2048, 00:13:53.245 "data_size": 63488 00:13:53.245 }, 00:13:53.245 { 00:13:53.245 "name": "BaseBdev3", 00:13:53.245 "uuid": "fbebbbdd-cc1d-5b66-bc6e-ca7440d5b8b8", 00:13:53.245 "is_configured": true, 00:13:53.245 "data_offset": 2048, 00:13:53.245 "data_size": 63488 00:13:53.245 }, 
00:13:53.245 { 00:13:53.245 "name": "BaseBdev4", 00:13:53.245 "uuid": "0485dd35-af99-559d-b035-dcec92dc5c93", 00:13:53.245 "is_configured": true, 00:13:53.245 "data_offset": 2048, 00:13:53.245 "data_size": 63488 00:13:53.245 } 00:13:53.245 ] 00:13:53.245 }' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77828 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77828 ']' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 77828 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77828 00:13:53.245 killing process with pid 77828 00:13:53.245 Received shutdown signal, test time was about 60.000000 seconds 00:13:53.245 00:13:53.245 Latency(us) 00:13:53.245 [2024-10-25T17:55:11.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.245 [2024-10-25T17:55:11.681Z] =================================================================================================================== 00:13:53.245 [2024-10-25T17:55:11.681Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77828' 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 77828 00:13:53.245 [2024-10-25 17:55:11.642607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.245 17:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 77828 00:13:53.245 [2024-10-25 17:55:11.642755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.245 [2024-10-25 17:55:11.642850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.245 [2024-10-25 17:55:11.642863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:53.812 [2024-10-25 17:55:12.219292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:55.186 00:13:55.186 real 0m26.745s 00:13:55.186 user 0m31.621s 00:13:55.186 sys 0m3.874s 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.186 ************************************ 00:13:55.186 END TEST raid_rebuild_test_sb 00:13:55.186 ************************************ 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.186 17:55:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:55.186 17:55:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:55.186 17:55:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.186 17:55:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:55.186 ************************************ 00:13:55.186 START TEST raid_rebuild_test_io 00:13:55.186 ************************************ 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78627 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78627 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78627 ']' 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.186 17:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.445 [2024-10-25 17:55:13.711263] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:55.445 [2024-10-25 17:55:13.711587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78627 ] 00:13:55.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.445 Zero copy mechanism will not be used. 
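The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` trace above is the test's loop that builds the base-bdev name list before launching bdevperf. Roughly, it expands to this (a simplified sketch; the real helper lives in `test/bdev/bdev_raid.sh` and captures the names into the `base_bdevs` array):

```shell
# Build BaseBdev1..BaseBdevN, as seen in the xtrace output.
build_base_bdevs() {
  local num_base_bdevs=$1
  local base_bdevs=()
  local i
  for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
  done
  echo "${base_bdevs[@]}"
}
```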
00:13:55.704 [2024-10-25 17:55:13.883970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.704 [2024-10-25 17:55:14.017661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.962 [2024-10-25 17:55:14.254157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.962 [2024-10-25 17:55:14.254237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.220 BaseBdev1_malloc 00:13:56.220 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.221 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.221 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.221 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 [2024-10-25 17:55:14.658518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.479 [2024-10-25 17:55:14.658718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.479 [2024-10-25 17:55:14.658755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:56.479 [2024-10-25 
17:55:14.658769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.479 [2024-10-25 17:55:14.661439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.479 [2024-10-25 17:55:14.661499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.479 BaseBdev1 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 BaseBdev2_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 [2024-10-25 17:55:14.728977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:56.479 [2024-10-25 17:55:14.729188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.479 [2024-10-25 17:55:14.729238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:56.479 [2024-10-25 17:55:14.729284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.479 [2024-10-25 17:55:14.731979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:56.479 [2024-10-25 17:55:14.732097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.479 BaseBdev2 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 BaseBdev3_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 [2024-10-25 17:55:14.800037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:56.479 [2024-10-25 17:55:14.800175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.479 [2024-10-25 17:55:14.800227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.479 [2024-10-25 17:55:14.800271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.479 [2024-10-25 17:55:14.802916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.479 [2024-10-25 17:55:14.803022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:56.479 BaseBdev3 00:13:56.479 17:55:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 BaseBdev4_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 [2024-10-25 17:55:14.850002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:56.479 [2024-10-25 17:55:14.850130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.479 [2024-10-25 17:55:14.850175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:56.479 [2024-10-25 17:55:14.850215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.479 [2024-10-25 17:55:14.852763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.479 [2024-10-25 17:55:14.852898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:56.479 BaseBdev4 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 spare_malloc 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.479 spare_delay 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.479 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.738 [2024-10-25 17:55:14.915823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.738 [2024-10-25 17:55:14.916005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.738 [2024-10-25 17:55:14.916059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:56.738 [2024-10-25 17:55:14.916104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.738 [2024-10-25 17:55:14.918817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.738 [2024-10-25 17:55:14.918968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.738 spare 00:13:56.738 17:55:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.738 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:56.738 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.738 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.738 [2024-10-25 17:55:14.924043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.738 [2024-10-25 17:55:14.926358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.738 [2024-10-25 17:55:14.926525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.738 [2024-10-25 17:55:14.926629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.738 [2024-10-25 17:55:14.926758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:56.738 [2024-10-25 17:55:14.926773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:56.738 [2024-10-25 17:55:14.927153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:56.738 [2024-10-25 17:55:14.927380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:56.738 [2024-10-25 17:55:14.927395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:56.738 [2024-10-25 17:55:14.927607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.738 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:56.739 17:55:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.739 "name": "raid_bdev1", 00:13:56.739 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:13:56.739 "strip_size_kb": 0, 00:13:56.739 "state": "online", 00:13:56.739 "raid_level": "raid1", 00:13:56.739 "superblock": false, 00:13:56.739 "num_base_bdevs": 4, 00:13:56.739 "num_base_bdevs_discovered": 4, 00:13:56.739 "num_base_bdevs_operational": 4, 00:13:56.739 "base_bdevs_list": [ 00:13:56.739 
{ 00:13:56.739 "name": "BaseBdev1", 00:13:56.739 "uuid": "adba9bd5-46a9-5e98-817d-d87b061624e4", 00:13:56.739 "is_configured": true, 00:13:56.739 "data_offset": 0, 00:13:56.739 "data_size": 65536 00:13:56.739 }, 00:13:56.739 { 00:13:56.739 "name": "BaseBdev2", 00:13:56.739 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:13:56.739 "is_configured": true, 00:13:56.739 "data_offset": 0, 00:13:56.739 "data_size": 65536 00:13:56.739 }, 00:13:56.739 { 00:13:56.739 "name": "BaseBdev3", 00:13:56.739 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:13:56.739 "is_configured": true, 00:13:56.739 "data_offset": 0, 00:13:56.739 "data_size": 65536 00:13:56.739 }, 00:13:56.739 { 00:13:56.739 "name": "BaseBdev4", 00:13:56.739 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:13:56.739 "is_configured": true, 00:13:56.739 "data_offset": 0, 00:13:56.739 "data_size": 65536 00:13:56.739 } 00:13:56.739 ] 00:13:56.739 }' 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.739 17:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.997 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.997 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.997 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.997 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.997 [2024-10-25 17:55:15.395588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.997 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.255 
17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.255 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 [2024-10-25 17:55:15.491014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.256 "name": "raid_bdev1", 00:13:57.256 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:13:57.256 "strip_size_kb": 0, 00:13:57.256 "state": "online", 00:13:57.256 "raid_level": "raid1", 00:13:57.256 "superblock": false, 00:13:57.256 "num_base_bdevs": 4, 00:13:57.256 "num_base_bdevs_discovered": 3, 00:13:57.256 "num_base_bdevs_operational": 3, 00:13:57.256 "base_bdevs_list": [ 00:13:57.256 { 00:13:57.256 "name": null, 00:13:57.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.256 "is_configured": false, 00:13:57.256 "data_offset": 0, 00:13:57.256 "data_size": 65536 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "name": "BaseBdev2", 00:13:57.256 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:13:57.256 "is_configured": true, 00:13:57.256 "data_offset": 0, 00:13:57.256 "data_size": 65536 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "name": "BaseBdev3", 00:13:57.256 "uuid": 
"9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:13:57.256 "is_configured": true, 00:13:57.256 "data_offset": 0, 00:13:57.256 "data_size": 65536 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "name": "BaseBdev4", 00:13:57.256 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:13:57.256 "is_configured": true, 00:13:57.256 "data_offset": 0, 00:13:57.256 "data_size": 65536 00:13:57.256 } 00:13:57.256 ] 00:13:57.256 }' 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.256 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.256 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:57.256 Zero copy mechanism will not be used. 00:13:57.256 Running I/O for 60 seconds... 00:13:57.256 [2024-10-25 17:55:15.647704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:57.821 17:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.821 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.821 17:55:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.821 [2024-10-25 17:55:15.993131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.821 17:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.821 17:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:57.821 [2024-10-25 17:55:16.064271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:57.821 [2024-10-25 17:55:16.066724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.821 [2024-10-25 17:55:16.178005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.821 
[2024-10-25 17:55:16.178771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.078 [2024-10-25 17:55:16.301092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.078 [2024-10-25 17:55:16.301578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.337 [2024-10-25 17:55:16.570247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.337 [2024-10-25 17:55:16.571927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.596 120.00 IOPS, 360.00 MiB/s [2024-10-25T17:55:17.032Z] [2024-10-25 17:55:16.780989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.596 [2024-10-25 17:55:16.781463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.856 17:55:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.856 "name": "raid_bdev1", 00:13:58.856 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:13:58.856 "strip_size_kb": 0, 00:13:58.856 "state": "online", 00:13:58.856 "raid_level": "raid1", 00:13:58.856 "superblock": false, 00:13:58.856 "num_base_bdevs": 4, 00:13:58.856 "num_base_bdevs_discovered": 4, 00:13:58.856 "num_base_bdevs_operational": 4, 00:13:58.856 "process": { 00:13:58.856 "type": "rebuild", 00:13:58.856 "target": "spare", 00:13:58.856 "progress": { 00:13:58.856 "blocks": 12288, 00:13:58.856 "percent": 18 00:13:58.856 } 00:13:58.856 }, 00:13:58.856 "base_bdevs_list": [ 00:13:58.856 { 00:13:58.856 "name": "spare", 00:13:58.856 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:13:58.856 "is_configured": true, 00:13:58.856 "data_offset": 0, 00:13:58.856 "data_size": 65536 00:13:58.856 }, 00:13:58.856 { 00:13:58.856 "name": "BaseBdev2", 00:13:58.856 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:13:58.856 "is_configured": true, 00:13:58.856 "data_offset": 0, 00:13:58.856 "data_size": 65536 00:13:58.856 }, 00:13:58.856 { 00:13:58.856 "name": "BaseBdev3", 00:13:58.856 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:13:58.856 "is_configured": true, 00:13:58.856 "data_offset": 0, 00:13:58.856 "data_size": 65536 00:13:58.856 }, 00:13:58.856 { 00:13:58.856 "name": "BaseBdev4", 00:13:58.856 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:13:58.856 "is_configured": true, 00:13:58.856 "data_offset": 0, 00:13:58.856 "data_size": 65536 00:13:58.856 } 00:13:58.856 ] 00:13:58.856 }' 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.856 [2024-10-25 17:55:17.105067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.856 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.856 [2024-10-25 17:55:17.204283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.856 [2024-10-25 17:55:17.264196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.856 [2024-10-25 17:55:17.268944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.856 [2024-10-25 17:55:17.269036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.856 [2024-10-25 17:55:17.269054] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:59.115 [2024-10-25 17:55:17.314737] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.115 "name": "raid_bdev1", 00:13:59.115 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:13:59.115 "strip_size_kb": 0, 00:13:59.115 "state": "online", 00:13:59.115 "raid_level": "raid1", 00:13:59.115 "superblock": false, 00:13:59.115 "num_base_bdevs": 4, 00:13:59.115 "num_base_bdevs_discovered": 3, 00:13:59.115 "num_base_bdevs_operational": 3, 00:13:59.115 "base_bdevs_list": [ 00:13:59.115 { 00:13:59.115 "name": null, 00:13:59.115 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:59.115 "is_configured": false, 00:13:59.115 "data_offset": 0, 00:13:59.115 "data_size": 65536 00:13:59.115 }, 00:13:59.115 { 00:13:59.115 "name": "BaseBdev2", 00:13:59.115 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:13:59.115 "is_configured": true, 00:13:59.115 "data_offset": 0, 00:13:59.115 "data_size": 65536 00:13:59.115 }, 00:13:59.115 { 00:13:59.115 "name": "BaseBdev3", 00:13:59.115 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:13:59.115 "is_configured": true, 00:13:59.115 "data_offset": 0, 00:13:59.115 "data_size": 65536 00:13:59.115 }, 00:13:59.115 { 00:13:59.115 "name": "BaseBdev4", 00:13:59.115 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:13:59.115 "is_configured": true, 00:13:59.115 "data_offset": 0, 00:13:59.115 "data_size": 65536 00:13:59.115 } 00:13:59.115 ] 00:13:59.115 }' 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.115 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.374 113.50 IOPS, 340.50 MiB/s [2024-10-25T17:55:17.810Z] 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:59.375 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.634 "name": "raid_bdev1", 00:13:59.634 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:13:59.634 "strip_size_kb": 0, 00:13:59.634 "state": "online", 00:13:59.634 "raid_level": "raid1", 00:13:59.634 "superblock": false, 00:13:59.634 "num_base_bdevs": 4, 00:13:59.634 "num_base_bdevs_discovered": 3, 00:13:59.634 "num_base_bdevs_operational": 3, 00:13:59.634 "base_bdevs_list": [ 00:13:59.634 { 00:13:59.634 "name": null, 00:13:59.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.634 "is_configured": false, 00:13:59.634 "data_offset": 0, 00:13:59.634 "data_size": 65536 00:13:59.634 }, 00:13:59.634 { 00:13:59.634 "name": "BaseBdev2", 00:13:59.634 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:13:59.634 "is_configured": true, 00:13:59.634 "data_offset": 0, 00:13:59.634 "data_size": 65536 00:13:59.634 }, 00:13:59.634 { 00:13:59.634 "name": "BaseBdev3", 00:13:59.634 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:13:59.634 "is_configured": true, 00:13:59.634 "data_offset": 0, 00:13:59.634 "data_size": 65536 00:13:59.634 }, 00:13:59.634 { 00:13:59.634 "name": "BaseBdev4", 00:13:59.634 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:13:59.634 "is_configured": true, 00:13:59.634 "data_offset": 0, 00:13:59.634 "data_size": 65536 00:13:59.634 } 00:13:59.634 ] 00:13:59.634 }' 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.634 17:55:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.634 17:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.634 [2024-10-25 17:55:17.945250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.634 17:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.634 17:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:59.634 [2024-10-25 17:55:18.056565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:59.634 [2024-10-25 17:55:18.059423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.892 [2024-10-25 17:55:18.172987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.892 [2024-10-25 17:55:18.174013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.149 [2024-10-25 17:55:18.392017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.149 [2024-10-25 17:55:18.392769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.406 127.00 IOPS, 381.00 MiB/s [2024-10-25T17:55:18.842Z] [2024-10-25 17:55:18.741232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:00.406 [2024-10-25 17:55:18.743911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:14:00.664 [2024-10-25 17:55:18.959262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:00.664 [2024-10-25 17:55:18.959950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.664 "name": "raid_bdev1", 00:14:00.664 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:00.664 "strip_size_kb": 0, 00:14:00.664 "state": "online", 00:14:00.664 "raid_level": "raid1", 00:14:00.664 "superblock": false, 00:14:00.664 "num_base_bdevs": 4, 00:14:00.664 "num_base_bdevs_discovered": 4, 00:14:00.664 "num_base_bdevs_operational": 4, 00:14:00.664 "process": { 00:14:00.664 "type": "rebuild", 00:14:00.664 "target": 
"spare", 00:14:00.664 "progress": { 00:14:00.664 "blocks": 10240, 00:14:00.664 "percent": 15 00:14:00.664 } 00:14:00.664 }, 00:14:00.664 "base_bdevs_list": [ 00:14:00.664 { 00:14:00.664 "name": "spare", 00:14:00.664 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:00.664 "is_configured": true, 00:14:00.664 "data_offset": 0, 00:14:00.664 "data_size": 65536 00:14:00.664 }, 00:14:00.664 { 00:14:00.664 "name": "BaseBdev2", 00:14:00.664 "uuid": "64da20f0-53f8-5ae4-84a6-860a9c209ca4", 00:14:00.664 "is_configured": true, 00:14:00.664 "data_offset": 0, 00:14:00.664 "data_size": 65536 00:14:00.664 }, 00:14:00.664 { 00:14:00.664 "name": "BaseBdev3", 00:14:00.664 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:00.664 "is_configured": true, 00:14:00.664 "data_offset": 0, 00:14:00.664 "data_size": 65536 00:14:00.664 }, 00:14:00.664 { 00:14:00.664 "name": "BaseBdev4", 00:14:00.664 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:00.664 "is_configured": true, 00:14:00.664 "data_offset": 0, 00:14:00.664 "data_size": 65536 00:14:00.664 } 00:14:00.664 ] 00:14:00.664 }' 00:14:00.664 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:00.922 17:55:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.922 [2024-10-25 17:55:19.177777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.922 [2024-10-25 17:55:19.303327] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:00.922 [2024-10-25 17:55:19.303541] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.922 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.179 "name": "raid_bdev1", 00:14:01.179 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:01.179 "strip_size_kb": 0, 00:14:01.179 "state": "online", 00:14:01.179 "raid_level": "raid1", 00:14:01.179 "superblock": false, 00:14:01.179 "num_base_bdevs": 4, 00:14:01.179 "num_base_bdevs_discovered": 3, 00:14:01.179 "num_base_bdevs_operational": 3, 00:14:01.179 "process": { 00:14:01.179 "type": "rebuild", 00:14:01.179 "target": "spare", 00:14:01.179 "progress": { 00:14:01.179 "blocks": 12288, 00:14:01.179 "percent": 18 00:14:01.179 } 00:14:01.179 }, 00:14:01.179 "base_bdevs_list": [ 00:14:01.179 { 00:14:01.179 "name": "spare", 00:14:01.179 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": null, 00:14:01.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.179 "is_configured": false, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": "BaseBdev3", 00:14:01.179 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": "BaseBdev4", 00:14:01.179 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 } 00:14:01.179 ] 00:14:01.179 }' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.179 [2024-10-25 17:55:19.441588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:01.179 [2024-10-25 17:55:19.443794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.179 "name": 
"raid_bdev1", 00:14:01.179 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:01.179 "strip_size_kb": 0, 00:14:01.179 "state": "online", 00:14:01.179 "raid_level": "raid1", 00:14:01.179 "superblock": false, 00:14:01.179 "num_base_bdevs": 4, 00:14:01.179 "num_base_bdevs_discovered": 3, 00:14:01.179 "num_base_bdevs_operational": 3, 00:14:01.179 "process": { 00:14:01.179 "type": "rebuild", 00:14:01.179 "target": "spare", 00:14:01.179 "progress": { 00:14:01.179 "blocks": 14336, 00:14:01.179 "percent": 21 00:14:01.179 } 00:14:01.179 }, 00:14:01.179 "base_bdevs_list": [ 00:14:01.179 { 00:14:01.179 "name": "spare", 00:14:01.179 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": null, 00:14:01.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.179 "is_configured": false, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": "BaseBdev3", 00:14:01.179 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 }, 00:14:01.179 { 00:14:01.179 "name": "BaseBdev4", 00:14:01.179 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:01.179 "is_configured": true, 00:14:01.179 "data_offset": 0, 00:14:01.179 "data_size": 65536 00:14:01.179 } 00:14:01.179 ] 00:14:01.179 }' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.179 17:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.179 17:55:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.436 107.25 IOPS, 321.75 MiB/s [2024-10-25T17:55:19.872Z] [2024-10-25 17:55:19.674911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.436 [2024-10-25 17:55:19.675938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.694 [2024-10-25 17:55:20.034155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:01.694 [2024-10-25 17:55:20.036104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:01.952 [2024-10-25 17:55:20.251334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.210 17:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.210 17:55:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.210 94.80 IOPS, 284.40 MiB/s [2024-10-25T17:55:20.646Z] 17:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.468 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.468 "name": "raid_bdev1", 00:14:02.468 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:02.468 "strip_size_kb": 0, 00:14:02.468 "state": "online", 00:14:02.468 "raid_level": "raid1", 00:14:02.468 "superblock": false, 00:14:02.468 "num_base_bdevs": 4, 00:14:02.468 "num_base_bdevs_discovered": 3, 00:14:02.468 "num_base_bdevs_operational": 3, 00:14:02.468 "process": { 00:14:02.468 "type": "rebuild", 00:14:02.468 "target": "spare", 00:14:02.468 "progress": { 00:14:02.468 "blocks": 26624, 00:14:02.468 "percent": 40 00:14:02.468 } 00:14:02.468 }, 00:14:02.468 "base_bdevs_list": [ 00:14:02.468 { 00:14:02.468 "name": "spare", 00:14:02.468 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:02.468 "is_configured": true, 00:14:02.468 "data_offset": 0, 00:14:02.468 "data_size": 65536 00:14:02.468 }, 00:14:02.468 { 00:14:02.468 "name": null, 00:14:02.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.468 "is_configured": false, 00:14:02.468 "data_offset": 0, 00:14:02.468 "data_size": 65536 00:14:02.468 }, 00:14:02.468 { 00:14:02.468 "name": "BaseBdev3", 00:14:02.468 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:02.468 "is_configured": true, 00:14:02.468 "data_offset": 0, 00:14:02.468 "data_size": 65536 00:14:02.468 }, 00:14:02.468 { 00:14:02.468 "name": "BaseBdev4", 00:14:02.469 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:02.469 "is_configured": true, 00:14:02.469 "data_offset": 0, 00:14:02.469 "data_size": 65536 00:14:02.469 } 00:14:02.469 ] 00:14:02.469 }' 00:14:02.469 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.469 [2024-10-25 
17:55:20.679700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:02.469 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.469 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.469 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.469 17:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.727 [2024-10-25 17:55:20.923098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:02.987 [2024-10-25 17:55:21.337938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:03.247 [2024-10-25 17:55:21.576789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:03.508 86.67 IOPS, 260.00 MiB/s [2024-10-25T17:55:21.944Z] 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.508 "name": "raid_bdev1", 00:14:03.508 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:03.508 "strip_size_kb": 0, 00:14:03.508 "state": "online", 00:14:03.508 "raid_level": "raid1", 00:14:03.508 "superblock": false, 00:14:03.508 "num_base_bdevs": 4, 00:14:03.508 "num_base_bdevs_discovered": 3, 00:14:03.508 "num_base_bdevs_operational": 3, 00:14:03.508 "process": { 00:14:03.508 "type": "rebuild", 00:14:03.508 "target": "spare", 00:14:03.508 "progress": { 00:14:03.508 "blocks": 47104, 00:14:03.508 "percent": 71 00:14:03.508 } 00:14:03.508 }, 00:14:03.508 "base_bdevs_list": [ 00:14:03.508 { 00:14:03.508 "name": "spare", 00:14:03.508 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:03.508 "is_configured": true, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 65536 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": null, 00:14:03.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.508 "is_configured": false, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 65536 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": "BaseBdev3", 00:14:03.508 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:03.508 "is_configured": true, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 65536 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": "BaseBdev4", 00:14:03.508 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:03.508 "is_configured": true, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 65536 00:14:03.508 } 00:14:03.508 ] 00:14:03.508 }' 00:14:03.508 17:55:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.508 17:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.508 [2024-10-25 17:55:21.940910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:03.768 [2024-10-25 17:55:22.051387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:04.029 [2024-10-25 17:55:22.287508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:04.302 [2024-10-25 17:55:22.517205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:04.562 79.43 IOPS, 238.29 MiB/s [2024-10-25T17:55:22.998Z] [2024-10-25 17:55:22.871344] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.562 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.562 "name": "raid_bdev1", 00:14:04.562 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:04.562 "strip_size_kb": 0, 00:14:04.562 "state": "online", 00:14:04.562 "raid_level": "raid1", 00:14:04.562 "superblock": false, 00:14:04.562 "num_base_bdevs": 4, 00:14:04.562 "num_base_bdevs_discovered": 3, 00:14:04.562 "num_base_bdevs_operational": 3, 00:14:04.562 "process": { 00:14:04.562 "type": "rebuild", 00:14:04.562 "target": "spare", 00:14:04.562 "progress": { 00:14:04.562 "blocks": 65536, 00:14:04.562 "percent": 100 00:14:04.562 } 00:14:04.562 }, 00:14:04.562 "base_bdevs_list": [ 00:14:04.562 { 00:14:04.562 "name": "spare", 00:14:04.563 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:04.563 "is_configured": true, 00:14:04.563 "data_offset": 0, 00:14:04.563 "data_size": 65536 00:14:04.563 }, 00:14:04.563 { 00:14:04.563 "name": null, 00:14:04.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.563 "is_configured": false, 00:14:04.563 "data_offset": 0, 00:14:04.563 "data_size": 65536 00:14:04.563 }, 00:14:04.563 { 00:14:04.563 "name": "BaseBdev3", 00:14:04.563 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:04.563 "is_configured": true, 00:14:04.563 "data_offset": 0, 00:14:04.563 "data_size": 65536 00:14:04.563 }, 00:14:04.563 { 00:14:04.563 "name": 
"BaseBdev4", 00:14:04.563 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:04.563 "is_configured": true, 00:14:04.563 "data_offset": 0, 00:14:04.563 "data_size": 65536 00:14:04.563 } 00:14:04.563 ] 00:14:04.563 }' 00:14:04.563 17:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.563 [2024-10-25 17:55:22.971176] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:04.563 [2024-10-25 17:55:22.983420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.822 17:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.822 17:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.822 17:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.822 17:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.649 73.88 IOPS, 221.62 MiB/s [2024-10-25T17:55:24.085Z] 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.649 
17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.649 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.908 "name": "raid_bdev1", 00:14:05.908 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:05.908 "strip_size_kb": 0, 00:14:05.908 "state": "online", 00:14:05.908 "raid_level": "raid1", 00:14:05.908 "superblock": false, 00:14:05.908 "num_base_bdevs": 4, 00:14:05.908 "num_base_bdevs_discovered": 3, 00:14:05.908 "num_base_bdevs_operational": 3, 00:14:05.908 "base_bdevs_list": [ 00:14:05.908 { 00:14:05.908 "name": "spare", 00:14:05.908 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:05.908 "is_configured": true, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": null, 00:14:05.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.908 "is_configured": false, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": "BaseBdev3", 00:14:05.908 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:05.908 "is_configured": true, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": "BaseBdev4", 00:14:05.908 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:05.908 "is_configured": true, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 } 00:14:05.908 ] 00:14:05.908 }' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.908 "name": "raid_bdev1", 00:14:05.908 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:05.908 "strip_size_kb": 0, 00:14:05.908 "state": "online", 00:14:05.908 "raid_level": "raid1", 00:14:05.908 "superblock": false, 00:14:05.908 "num_base_bdevs": 4, 00:14:05.908 "num_base_bdevs_discovered": 3, 00:14:05.908 "num_base_bdevs_operational": 3, 00:14:05.908 "base_bdevs_list": [ 00:14:05.908 { 00:14:05.908 "name": "spare", 00:14:05.908 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:05.908 "is_configured": true, 
00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": null, 00:14:05.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.908 "is_configured": false, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": "BaseBdev3", 00:14:05.908 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:05.908 "is_configured": true, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 }, 00:14:05.908 { 00:14:05.908 "name": "BaseBdev4", 00:14:05.908 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:05.908 "is_configured": true, 00:14:05.908 "data_offset": 0, 00:14:05.908 "data_size": 65536 00:14:05.908 } 00:14:05.908 ] 00:14:05.908 }' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.908 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.909 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.168 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.168 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.168 "name": "raid_bdev1", 00:14:06.168 "uuid": "ec2272cd-99a4-455d-bd62-36f48ba95efe", 00:14:06.168 "strip_size_kb": 0, 00:14:06.168 "state": "online", 00:14:06.168 "raid_level": "raid1", 00:14:06.168 "superblock": false, 00:14:06.168 "num_base_bdevs": 4, 00:14:06.168 "num_base_bdevs_discovered": 3, 00:14:06.168 "num_base_bdevs_operational": 3, 00:14:06.168 "base_bdevs_list": [ 00:14:06.168 { 00:14:06.168 "name": "spare", 00:14:06.168 "uuid": "678be824-c252-5f21-a5f7-b44a21d38110", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 0, 00:14:06.168 "data_size": 65536 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": null, 00:14:06.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.168 "is_configured": false, 00:14:06.168 "data_offset": 0, 00:14:06.168 "data_size": 65536 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": "BaseBdev3", 00:14:06.168 "uuid": "9ef7ce60-c517-5d9b-9668-2a086fa6c582", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 0, 00:14:06.168 
"data_size": 65536 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": "BaseBdev4", 00:14:06.168 "uuid": "08332ccb-791d-5250-83ae-996e00301806", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 0, 00:14:06.168 "data_size": 65536 00:14:06.168 } 00:14:06.168 ] 00:14:06.168 }' 00:14:06.168 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.168 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 69.78 IOPS, 209.33 MiB/s [2024-10-25T17:55:24.863Z] 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.427 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.427 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 [2024-10-25 17:55:24.799780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.427 [2024-10-25 17:55:24.799941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.686 00:14:06.686 Latency(us) 00:14:06.686 [2024-10-25T17:55:25.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.686 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:06.686 raid_bdev1 : 9.24 69.35 208.05 0.00 0.00 20047.26 343.42 115389.15 00:14:06.686 [2024-10-25T17:55:25.122Z] =================================================================================================================== 00:14:06.686 [2024-10-25T17:55:25.122Z] Total : 69.35 208.05 0.00 0.00 20047.26 343.42 115389.15 00:14:06.686 [2024-10-25 17:55:24.901309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.686 [2024-10-25 17:55:24.901498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.686 [2024-10-25 17:55:24.901648] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.686 [2024-10-25 17:55:24.901663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.686 { 00:14:06.686 "results": [ 00:14:06.686 { 00:14:06.686 "job": "raid_bdev1", 00:14:06.686 "core_mask": "0x1", 00:14:06.686 "workload": "randrw", 00:14:06.686 "percentage": 50, 00:14:06.686 "status": "finished", 00:14:06.686 "queue_depth": 2, 00:14:06.686 "io_size": 3145728, 00:14:06.686 "runtime": 9.24315, 00:14:06.686 "iops": 69.34865278611728, 00:14:06.686 "mibps": 208.04595835835187, 00:14:06.686 "io_failed": 0, 00:14:06.686 "io_timeout": 0, 00:14:06.686 "avg_latency_us": 20047.262004646123, 00:14:06.686 "min_latency_us": 343.42008733624453, 00:14:06.686 "max_latency_us": 115389.14934497817 00:14:06.686 } 00:14:06.686 ], 00:14:06.686 "core_count": 1 00:14:06.686 } 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare 
/dev/nbd0 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.686 17:55:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:06.946 /dev/nbd0 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:06.946 17:55:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.946 1+0 records in 00:14:06.946 1+0 records out 00:14:06.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452374 s, 9.1 MB/s 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.946 17:55:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.946 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:07.205 /dev/nbd1 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:07.205 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.206 1+0 records in 00:14:07.206 1+0 records out 00:14:07.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651186 s, 6.3 MB/s 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.206 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.465 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.724 17:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.724 17:55:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:07.985 /dev/nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.985 1+0 records in 00:14:07.985 1+0 records out 00:14:07.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491865 s, 8.3 MB/s 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.985 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.245 
17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.245 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78627 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' 
-z 78627 ']' 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78627 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78627 00:14:08.505 killing process with pid 78627 00:14:08.505 Received shutdown signal, test time was about 11.268797 seconds 00:14:08.505 00:14:08.505 Latency(us) 00:14:08.505 [2024-10-25T17:55:26.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.505 [2024-10-25T17:55:26.941Z] =================================================================================================================== 00:14:08.505 [2024-10-25T17:55:26.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78627' 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78627 00:14:08.505 17:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78627 00:14:08.505 [2024-10-25 17:55:26.897324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.074 [2024-10-25 17:55:27.431388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.464 ************************************ 00:14:10.464 END TEST raid_rebuild_test_io 00:14:10.464 ************************************ 00:14:10.464 17:55:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.464 00:14:10.464 real 
0m15.295s 00:14:10.464 user 0m19.233s 00:14:10.464 sys 0m1.861s 00:14:10.464 17:55:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.464 17:55:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.724 17:55:28 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:10.724 17:55:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:10.724 17:55:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.724 17:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.724 ************************************ 00:14:10.724 START TEST raid_rebuild_test_sb_io 00:14:10.724 ************************************ 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79066 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79066 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79066 ']' 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.724 17:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.724 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.724 Zero copy mechanism will not be used. 00:14:10.724 [2024-10-25 17:55:29.058199] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:14:10.724 [2024-10-25 17:55:29.058330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79066 ] 00:14:10.984 [2024-10-25 17:55:29.237050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.984 [2024-10-25 17:55:29.399051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.243 [2024-10-25 17:55:29.667023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.244 [2024-10-25 17:55:29.667125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 BaseBdev1_malloc 00:14:11.814 17:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 [2024-10-25 17:55:30.006337] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.814 [2024-10-25 17:55:30.006450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.814 [2024-10-25 17:55:30.006488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.814 [2024-10-25 17:55:30.006504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.814 [2024-10-25 17:55:30.009776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.814 [2024-10-25 17:55:30.009868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.814 BaseBdev1 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 BaseBdev2_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 [2024-10-25 17:55:30.077095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.814 [2024-10-25 17:55:30.077206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:11.814 [2024-10-25 17:55:30.077237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.814 [2024-10-25 17:55:30.077258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.814 [2024-10-25 17:55:30.080228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.814 [2024-10-25 17:55:30.080294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.814 BaseBdev2 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 BaseBdev3_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 [2024-10-25 17:55:30.160457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.814 [2024-10-25 17:55:30.160581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.814 [2024-10-25 17:55:30.160617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.814 
[2024-10-25 17:55:30.160633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.814 [2024-10-25 17:55:30.163701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.814 [2024-10-25 17:55:30.163768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.814 BaseBdev3 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 BaseBdev4_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.814 [2024-10-25 17:55:30.230280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:11.814 [2024-10-25 17:55:30.230372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.814 [2024-10-25 17:55:30.230402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:11.814 [2024-10-25 17:55:30.230418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.814 [2024-10-25 17:55:30.233467] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.814 [2024-10-25 17:55:30.233524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:11.814 BaseBdev4 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.814 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.074 spare_malloc 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.074 spare_delay 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.074 [2024-10-25 17:55:30.316739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.074 [2024-10-25 17:55:30.316867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.074 [2024-10-25 17:55:30.316905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:12.074 [2024-10-25 17:55:30.316921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.074 [2024-10-25 17:55:30.320105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.074 [2024-10-25 17:55:30.320172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.074 spare 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.074 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.075 [2024-10-25 17:55:30.328805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.075 [2024-10-25 17:55:30.331468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.075 [2024-10-25 17:55:30.331573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.075 [2024-10-25 17:55:30.331645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.075 [2024-10-25 17:55:30.331927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.075 [2024-10-25 17:55:30.331958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.075 [2024-10-25 17:55:30.332367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.075 [2024-10-25 17:55:30.332672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.075 [2024-10-25 17:55:30.332695] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.075 [2024-10-25 17:55:30.333039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.075 "name": "raid_bdev1", 00:14:12.075 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:12.075 "strip_size_kb": 0, 00:14:12.075 "state": "online", 00:14:12.075 "raid_level": "raid1", 00:14:12.075 "superblock": true, 00:14:12.075 "num_base_bdevs": 4, 00:14:12.075 "num_base_bdevs_discovered": 4, 00:14:12.075 "num_base_bdevs_operational": 4, 00:14:12.075 "base_bdevs_list": [ 00:14:12.075 { 00:14:12.075 "name": "BaseBdev1", 00:14:12.075 "uuid": "ad929d39-b5aa-5c4c-b206-212b7140e383", 00:14:12.075 "is_configured": true, 00:14:12.075 "data_offset": 2048, 00:14:12.075 "data_size": 63488 00:14:12.075 }, 00:14:12.075 { 00:14:12.075 "name": "BaseBdev2", 00:14:12.075 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:12.075 "is_configured": true, 00:14:12.075 "data_offset": 2048, 00:14:12.075 "data_size": 63488 00:14:12.075 }, 00:14:12.075 { 00:14:12.075 "name": "BaseBdev3", 00:14:12.075 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:12.075 "is_configured": true, 00:14:12.075 "data_offset": 2048, 00:14:12.075 "data_size": 63488 00:14:12.075 }, 00:14:12.075 { 00:14:12.075 "name": "BaseBdev4", 00:14:12.075 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:12.075 "is_configured": true, 00:14:12.075 "data_offset": 2048, 00:14:12.075 "data_size": 63488 00:14:12.075 } 00:14:12.075 ] 00:14:12.075 }' 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.075 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 [2024-10-25 17:55:30.825091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 [2024-10-25 17:55:30.928611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.643 17:55:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.643 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.643 "name": "raid_bdev1", 00:14:12.644 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:12.644 "strip_size_kb": 0, 00:14:12.644 "state": "online", 00:14:12.644 "raid_level": "raid1", 00:14:12.644 
"superblock": true, 00:14:12.644 "num_base_bdevs": 4, 00:14:12.644 "num_base_bdevs_discovered": 3, 00:14:12.644 "num_base_bdevs_operational": 3, 00:14:12.644 "base_bdevs_list": [ 00:14:12.644 { 00:14:12.644 "name": null, 00:14:12.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.644 "is_configured": false, 00:14:12.644 "data_offset": 0, 00:14:12.644 "data_size": 63488 00:14:12.644 }, 00:14:12.644 { 00:14:12.644 "name": "BaseBdev2", 00:14:12.644 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:12.644 "is_configured": true, 00:14:12.644 "data_offset": 2048, 00:14:12.644 "data_size": 63488 00:14:12.644 }, 00:14:12.644 { 00:14:12.644 "name": "BaseBdev3", 00:14:12.644 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:12.644 "is_configured": true, 00:14:12.644 "data_offset": 2048, 00:14:12.644 "data_size": 63488 00:14:12.644 }, 00:14:12.644 { 00:14:12.644 "name": "BaseBdev4", 00:14:12.644 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:12.644 "is_configured": true, 00:14:12.644 "data_offset": 2048, 00:14:12.644 "data_size": 63488 00:14:12.644 } 00:14:12.644 ] 00:14:12.644 }' 00:14:12.644 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.644 17:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.644 [2024-10-25 17:55:31.071857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:12.644 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.644 Zero copy mechanism will not be used. 00:14:12.644 Running I/O for 60 seconds... 
00:14:13.211 17:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.211 17:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.211 17:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.211 [2024-10-25 17:55:31.388102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.211 17:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.211 17:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.211 [2024-10-25 17:55:31.470592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:13.211 [2024-10-25 17:55:31.473319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.211 [2024-10-25 17:55:31.639245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.469 [2024-10-25 17:55:31.807021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.469 [2024-10-25 17:55:31.807606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.986 129.00 IOPS, 387.00 MiB/s [2024-10-25T17:55:32.422Z] [2024-10-25 17:55:32.175205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.986 [2024-10-25 17:55:32.334424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.245 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.246 "name": "raid_bdev1", 00:14:14.246 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:14.246 "strip_size_kb": 0, 00:14:14.246 "state": "online", 00:14:14.246 "raid_level": "raid1", 00:14:14.246 "superblock": true, 00:14:14.246 "num_base_bdevs": 4, 00:14:14.246 "num_base_bdevs_discovered": 4, 00:14:14.246 "num_base_bdevs_operational": 4, 00:14:14.246 "process": { 00:14:14.246 "type": "rebuild", 00:14:14.246 "target": "spare", 00:14:14.246 "progress": { 00:14:14.246 "blocks": 12288, 00:14:14.246 "percent": 19 00:14:14.246 } 00:14:14.246 }, 00:14:14.246 "base_bdevs_list": [ 00:14:14.246 { 00:14:14.246 "name": "spare", 00:14:14.246 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:14.246 "is_configured": true, 00:14:14.246 "data_offset": 2048, 00:14:14.246 "data_size": 63488 00:14:14.246 }, 00:14:14.246 { 00:14:14.246 "name": "BaseBdev2", 00:14:14.246 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:14.246 "is_configured": true, 
00:14:14.246 "data_offset": 2048, 00:14:14.246 "data_size": 63488 00:14:14.246 }, 00:14:14.246 { 00:14:14.246 "name": "BaseBdev3", 00:14:14.246 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:14.246 "is_configured": true, 00:14:14.246 "data_offset": 2048, 00:14:14.246 "data_size": 63488 00:14:14.246 }, 00:14:14.246 { 00:14:14.246 "name": "BaseBdev4", 00:14:14.246 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:14.246 "is_configured": true, 00:14:14.246 "data_offset": 2048, 00:14:14.246 "data_size": 63488 00:14:14.246 } 00:14:14.246 ] 00:14:14.246 }' 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.246 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.246 [2024-10-25 17:55:32.613458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.505 [2024-10-25 17:55:32.692673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:14.505 [2024-10-25 17:55:32.805287] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.505 [2024-10-25 17:55:32.814992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.505 [2024-10-25 17:55:32.815089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:14.505 [2024-10-25 17:55:32.815114] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.505 [2024-10-25 17:55:32.874339] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.505 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:14.765 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.765 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.765 "name": "raid_bdev1", 00:14:14.765 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:14.765 "strip_size_kb": 0, 00:14:14.765 "state": "online", 00:14:14.766 "raid_level": "raid1", 00:14:14.766 "superblock": true, 00:14:14.766 "num_base_bdevs": 4, 00:14:14.766 "num_base_bdevs_discovered": 3, 00:14:14.766 "num_base_bdevs_operational": 3, 00:14:14.766 "base_bdevs_list": [ 00:14:14.766 { 00:14:14.766 "name": null, 00:14:14.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.766 "is_configured": false, 00:14:14.766 "data_offset": 0, 00:14:14.766 "data_size": 63488 00:14:14.766 }, 00:14:14.766 { 00:14:14.766 "name": "BaseBdev2", 00:14:14.766 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:14.766 "is_configured": true, 00:14:14.766 "data_offset": 2048, 00:14:14.766 "data_size": 63488 00:14:14.766 }, 00:14:14.766 { 00:14:14.766 "name": "BaseBdev3", 00:14:14.766 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:14.766 "is_configured": true, 00:14:14.766 "data_offset": 2048, 00:14:14.766 "data_size": 63488 00:14:14.766 }, 00:14:14.766 { 00:14:14.766 "name": "BaseBdev4", 00:14:14.766 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:14.766 "is_configured": true, 00:14:14.766 "data_offset": 2048, 00:14:14.766 "data_size": 63488 00:14:14.766 } 00:14:14.766 ] 00:14:14.766 }' 00:14:14.766 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.766 17:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.027 98.50 IOPS, 295.50 MiB/s [2024-10-25T17:55:33.463Z] 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.027 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.027 "name": "raid_bdev1", 00:14:15.027 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:15.027 "strip_size_kb": 0, 00:14:15.027 "state": "online", 00:14:15.027 "raid_level": "raid1", 00:14:15.027 "superblock": true, 00:14:15.027 "num_base_bdevs": 4, 00:14:15.027 "num_base_bdevs_discovered": 3, 00:14:15.027 "num_base_bdevs_operational": 3, 00:14:15.027 "base_bdevs_list": [ 00:14:15.027 { 00:14:15.027 "name": null, 00:14:15.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.027 "is_configured": false, 00:14:15.027 "data_offset": 0, 00:14:15.027 "data_size": 63488 00:14:15.027 }, 00:14:15.027 { 00:14:15.027 "name": "BaseBdev2", 00:14:15.027 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:15.027 "is_configured": true, 00:14:15.027 "data_offset": 2048, 00:14:15.027 "data_size": 63488 00:14:15.027 }, 00:14:15.027 { 00:14:15.027 "name": "BaseBdev3", 00:14:15.027 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 
00:14:15.027 "is_configured": true, 00:14:15.027 "data_offset": 2048, 00:14:15.027 "data_size": 63488 00:14:15.027 }, 00:14:15.027 { 00:14:15.027 "name": "BaseBdev4", 00:14:15.027 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:15.027 "is_configured": true, 00:14:15.027 "data_offset": 2048, 00:14:15.027 "data_size": 63488 00:14:15.027 } 00:14:15.027 ] 00:14:15.027 }' 00:14:15.028 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.028 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.028 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.291 [2024-10-25 17:55:33.473733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.291 17:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.291 [2024-10-25 17:55:33.552292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:15.291 [2024-10-25 17:55:33.555018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.291 [2024-10-25 17:55:33.714022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.291 [2024-10-25 17:55:33.715011] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.552 [2024-10-25 17:55:33.828413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.552 [2024-10-25 17:55:33.828993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.811 110.00 IOPS, 330.00 MiB/s [2024-10-25T17:55:34.247Z] [2024-10-25 17:55:34.157860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:16.072 [2024-10-25 17:55:34.391042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.072 [2024-10-25 17:55:34.392319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.332 "name": "raid_bdev1", 00:14:16.332 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:16.332 "strip_size_kb": 0, 00:14:16.332 "state": "online", 00:14:16.332 "raid_level": "raid1", 00:14:16.332 "superblock": true, 00:14:16.332 "num_base_bdevs": 4, 00:14:16.332 "num_base_bdevs_discovered": 4, 00:14:16.332 "num_base_bdevs_operational": 4, 00:14:16.332 "process": { 00:14:16.332 "type": "rebuild", 00:14:16.332 "target": "spare", 00:14:16.332 "progress": { 00:14:16.332 "blocks": 10240, 00:14:16.332 "percent": 16 00:14:16.332 } 00:14:16.332 }, 00:14:16.332 "base_bdevs_list": [ 00:14:16.332 { 00:14:16.332 "name": "spare", 00:14:16.332 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:16.332 "is_configured": true, 00:14:16.332 "data_offset": 2048, 00:14:16.332 "data_size": 63488 00:14:16.332 }, 00:14:16.332 { 00:14:16.332 "name": "BaseBdev2", 00:14:16.332 "uuid": "d32a93d9-ef11-535b-b145-6b9433777dca", 00:14:16.332 "is_configured": true, 00:14:16.332 "data_offset": 2048, 00:14:16.332 "data_size": 63488 00:14:16.332 }, 00:14:16.332 { 00:14:16.332 "name": "BaseBdev3", 00:14:16.332 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:16.332 "is_configured": true, 00:14:16.332 "data_offset": 2048, 00:14:16.332 "data_size": 63488 00:14:16.332 }, 00:14:16.332 { 00:14:16.332 "name": "BaseBdev4", 00:14:16.332 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:16.332 "is_configured": true, 00:14:16.332 "data_offset": 2048, 00:14:16.332 "data_size": 63488 00:14:16.332 } 00:14:16.332 ] 00:14:16.332 }' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.332 
17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:16.332 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.332 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.332 [2024-10-25 17:55:34.656347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.593 [2024-10-25 17:55:34.952706] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:16.593 [2024-10-25 17:55:34.952795] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.593 17:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.593 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.593 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.593 "name": "raid_bdev1", 00:14:16.593 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:16.593 "strip_size_kb": 0, 00:14:16.593 "state": "online", 00:14:16.593 "raid_level": "raid1", 00:14:16.593 "superblock": true, 00:14:16.593 "num_base_bdevs": 4, 00:14:16.593 "num_base_bdevs_discovered": 3, 00:14:16.593 "num_base_bdevs_operational": 3, 00:14:16.593 "process": { 00:14:16.593 "type": "rebuild", 00:14:16.593 "target": "spare", 00:14:16.593 "progress": { 00:14:16.593 "blocks": 14336, 00:14:16.593 "percent": 22 00:14:16.593 } 00:14:16.593 }, 00:14:16.593 "base_bdevs_list": [ 00:14:16.593 { 00:14:16.593 "name": "spare", 00:14:16.593 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:16.593 "is_configured": true, 00:14:16.593 "data_offset": 2048, 00:14:16.593 "data_size": 63488 00:14:16.593 }, 00:14:16.593 { 
00:14:16.593 "name": null, 00:14:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.593 "is_configured": false, 00:14:16.593 "data_offset": 0, 00:14:16.593 "data_size": 63488 00:14:16.593 }, 00:14:16.593 { 00:14:16.593 "name": "BaseBdev3", 00:14:16.593 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:16.593 "is_configured": true, 00:14:16.593 "data_offset": 2048, 00:14:16.593 "data_size": 63488 00:14:16.593 }, 00:14:16.593 { 00:14:16.593 "name": "BaseBdev4", 00:14:16.593 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:16.593 "is_configured": true, 00:14:16.593 "data_offset": 2048, 00:14:16.593 "data_size": 63488 00:14:16.593 } 00:14:16.593 ] 00:14:16.593 }' 00:14:16.593 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.853 99.00 IOPS, 297.00 MiB/s [2024-10-25T17:55:35.289Z] 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=500 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.853 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.853 "name": "raid_bdev1", 00:14:16.853 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:16.853 "strip_size_kb": 0, 00:14:16.853 "state": "online", 00:14:16.853 "raid_level": "raid1", 00:14:16.853 "superblock": true, 00:14:16.853 "num_base_bdevs": 4, 00:14:16.854 "num_base_bdevs_discovered": 3, 00:14:16.854 "num_base_bdevs_operational": 3, 00:14:16.854 "process": { 00:14:16.854 "type": "rebuild", 00:14:16.854 "target": "spare", 00:14:16.854 "progress": { 00:14:16.854 "blocks": 16384, 00:14:16.854 "percent": 25 00:14:16.854 } 00:14:16.854 }, 00:14:16.854 "base_bdevs_list": [ 00:14:16.854 { 00:14:16.854 "name": "spare", 00:14:16.854 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:16.854 "is_configured": true, 00:14:16.854 "data_offset": 2048, 00:14:16.854 "data_size": 63488 00:14:16.854 }, 00:14:16.854 { 00:14:16.854 "name": null, 00:14:16.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.854 "is_configured": false, 00:14:16.854 "data_offset": 0, 00:14:16.854 "data_size": 63488 00:14:16.854 }, 00:14:16.854 { 00:14:16.854 "name": "BaseBdev3", 00:14:16.854 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:16.854 "is_configured": true, 00:14:16.854 "data_offset": 2048, 00:14:16.854 "data_size": 63488 00:14:16.854 }, 00:14:16.854 { 00:14:16.854 "name": "BaseBdev4", 
00:14:16.854 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:16.854 "is_configured": true, 00:14:16.854 "data_offset": 2048, 00:14:16.854 "data_size": 63488 00:14:16.854 } 00:14:16.854 ] 00:14:16.854 }' 00:14:16.854 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.854 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.854 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.854 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.854 17:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.113 [2024-10-25 17:55:35.352458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:17.378 [2024-10-25 17:55:35.589597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:17.651 [2024-10-25 17:55:35.819252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:17.651 [2024-10-25 17:55:35.957349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:17.651 [2024-10-25 17:55:35.957915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:17.920 90.40 IOPS, 271.20 MiB/s [2024-10-25T17:55:36.356Z] 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.920 [2024-10-25 17:55:36.292908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:17.920 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.920 "name": "raid_bdev1", 00:14:17.920 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:17.920 "strip_size_kb": 0, 00:14:17.920 "state": "online", 00:14:17.920 "raid_level": "raid1", 00:14:17.920 "superblock": true, 00:14:17.920 "num_base_bdevs": 4, 00:14:17.920 "num_base_bdevs_discovered": 3, 00:14:17.920 "num_base_bdevs_operational": 3, 00:14:17.920 "process": { 00:14:17.920 "type": "rebuild", 00:14:17.920 "target": "spare", 00:14:17.920 "progress": { 00:14:17.921 "blocks": 30720, 00:14:17.921 "percent": 48 00:14:17.921 } 00:14:17.921 }, 00:14:17.921 "base_bdevs_list": [ 00:14:17.921 { 00:14:17.921 "name": "spare", 00:14:17.921 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:17.921 "is_configured": true, 00:14:17.921 "data_offset": 2048, 00:14:17.921 "data_size": 63488 00:14:17.921 
}, 00:14:17.921 { 00:14:17.921 "name": null, 00:14:17.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.921 "is_configured": false, 00:14:17.921 "data_offset": 0, 00:14:17.921 "data_size": 63488 00:14:17.921 }, 00:14:17.921 { 00:14:17.921 "name": "BaseBdev3", 00:14:17.921 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:17.921 "is_configured": true, 00:14:17.921 "data_offset": 2048, 00:14:17.921 "data_size": 63488 00:14:17.921 }, 00:14:17.921 { 00:14:17.921 "name": "BaseBdev4", 00:14:17.921 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:17.921 "is_configured": true, 00:14:17.921 "data_offset": 2048, 00:14:17.921 "data_size": 63488 00:14:17.921 } 00:14:17.921 ] 00:14:17.921 }' 00:14:17.921 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.921 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.921 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.179 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.179 17:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.179 [2024-10-25 17:55:36.499070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:18.438 [2024-10-25 17:55:36.748661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:18.957 81.83 IOPS, 245.50 MiB/s [2024-10-25T17:55:37.393Z] 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.957 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.957 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:18.957 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.957 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.957 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.216 [2024-10-25 17:55:37.411198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.216 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.216 "name": "raid_bdev1", 00:14:19.216 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:19.216 "strip_size_kb": 0, 00:14:19.216 "state": "online", 00:14:19.216 "raid_level": "raid1", 00:14:19.216 "superblock": true, 00:14:19.216 "num_base_bdevs": 4, 00:14:19.216 "num_base_bdevs_discovered": 3, 00:14:19.216 "num_base_bdevs_operational": 3, 00:14:19.216 "process": { 00:14:19.216 "type": "rebuild", 00:14:19.216 "target": "spare", 00:14:19.216 "progress": { 00:14:19.216 "blocks": 49152, 00:14:19.216 "percent": 77 00:14:19.216 } 00:14:19.216 }, 00:14:19.216 "base_bdevs_list": [ 00:14:19.216 { 00:14:19.216 "name": "spare", 00:14:19.216 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:19.216 "is_configured": true, 00:14:19.216 "data_offset": 2048, 00:14:19.216 "data_size": 63488 00:14:19.216 
}, 00:14:19.216 { 00:14:19.216 "name": null, 00:14:19.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.216 "is_configured": false, 00:14:19.216 "data_offset": 0, 00:14:19.216 "data_size": 63488 00:14:19.216 }, 00:14:19.216 { 00:14:19.216 "name": "BaseBdev3", 00:14:19.216 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:19.216 "is_configured": true, 00:14:19.216 "data_offset": 2048, 00:14:19.216 "data_size": 63488 00:14:19.217 }, 00:14:19.217 { 00:14:19.217 "name": "BaseBdev4", 00:14:19.217 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:19.217 "is_configured": true, 00:14:19.217 "data_offset": 2048, 00:14:19.217 "data_size": 63488 00:14:19.217 } 00:14:19.217 ] 00:14:19.217 }' 00:14:19.217 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.217 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.217 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.217 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.217 17:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.217 [2024-10-25 17:55:37.627859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:19.784 [2024-10-25 17:55:37.972038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:19.784 74.14 IOPS, 222.43 MiB/s [2024-10-25T17:55:38.220Z] [2024-10-25 17:55:38.082461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:20.043 [2024-10-25 17:55:38.412488] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:20.302 [2024-10-25 17:55:38.517846] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:20.302 [2024-10-25 17:55:38.524799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.302 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.302 "name": "raid_bdev1", 00:14:20.302 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:20.302 "strip_size_kb": 0, 00:14:20.302 "state": "online", 00:14:20.302 "raid_level": "raid1", 00:14:20.302 "superblock": true, 00:14:20.302 "num_base_bdevs": 4, 00:14:20.302 "num_base_bdevs_discovered": 3, 00:14:20.302 "num_base_bdevs_operational": 3, 00:14:20.302 "base_bdevs_list": [ 00:14:20.302 { 00:14:20.302 "name": 
"spare", 00:14:20.302 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:20.302 "is_configured": true, 00:14:20.302 "data_offset": 2048, 00:14:20.302 "data_size": 63488 00:14:20.302 }, 00:14:20.302 { 00:14:20.302 "name": null, 00:14:20.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.302 "is_configured": false, 00:14:20.302 "data_offset": 0, 00:14:20.302 "data_size": 63488 00:14:20.302 }, 00:14:20.302 { 00:14:20.302 "name": "BaseBdev3", 00:14:20.302 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:20.302 "is_configured": true, 00:14:20.302 "data_offset": 2048, 00:14:20.302 "data_size": 63488 00:14:20.302 }, 00:14:20.302 { 00:14:20.302 "name": "BaseBdev4", 00:14:20.302 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:20.303 "is_configured": true, 00:14:20.303 "data_offset": 2048, 00:14:20.303 "data_size": 63488 00:14:20.303 } 00:14:20.303 ] 00:14:20.303 }' 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.303 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.563 "name": "raid_bdev1", 00:14:20.563 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:20.563 "strip_size_kb": 0, 00:14:20.563 "state": "online", 00:14:20.563 "raid_level": "raid1", 00:14:20.563 "superblock": true, 00:14:20.563 "num_base_bdevs": 4, 00:14:20.563 "num_base_bdevs_discovered": 3, 00:14:20.563 "num_base_bdevs_operational": 3, 00:14:20.563 "base_bdevs_list": [ 00:14:20.563 { 00:14:20.563 "name": "spare", 00:14:20.563 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": null, 00:14:20.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.563 "is_configured": false, 00:14:20.563 "data_offset": 0, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": "BaseBdev3", 00:14:20.563 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": "BaseBdev4", 00:14:20.563 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 } 00:14:20.563 ] 
00:14:20.563 }' 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.563 17:55:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.563 "name": "raid_bdev1", 00:14:20.563 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:20.563 "strip_size_kb": 0, 00:14:20.563 "state": "online", 00:14:20.563 "raid_level": "raid1", 00:14:20.563 "superblock": true, 00:14:20.563 "num_base_bdevs": 4, 00:14:20.563 "num_base_bdevs_discovered": 3, 00:14:20.563 "num_base_bdevs_operational": 3, 00:14:20.563 "base_bdevs_list": [ 00:14:20.563 { 00:14:20.563 "name": "spare", 00:14:20.563 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": null, 00:14:20.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.563 "is_configured": false, 00:14:20.563 "data_offset": 0, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": "BaseBdev3", 00:14:20.563 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 }, 00:14:20.563 { 00:14:20.563 "name": "BaseBdev4", 00:14:20.563 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:20.563 "is_configured": true, 00:14:20.563 "data_offset": 2048, 00:14:20.563 "data_size": 63488 00:14:20.563 } 00:14:20.563 ] 00:14:20.563 }' 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.563 17:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.083 69.88 IOPS, 209.62 MiB/s [2024-10-25T17:55:39.519Z] 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.083 
17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.083 [2024-10-25 17:55:39.270123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.083 [2024-10-25 17:55:39.270175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.083 00:14:21.083 Latency(us) 00:14:21.083 [2024-10-25T17:55:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.083 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.083 raid_bdev1 : 8.24 68.61 205.83 0.00 0.00 18894.23 370.25 124547.02 00:14:21.083 [2024-10-25T17:55:39.519Z] =================================================================================================================== 00:14:21.083 [2024-10-25T17:55:39.519Z] Total : 68.61 205.83 0.00 0.00 18894.23 370.25 124547.02 00:14:21.083 [2024-10-25 17:55:39.317386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.083 [2024-10-25 17:55:39.317449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.083 [2024-10-25 17:55:39.317565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.083 [2024-10-25 17:55:39.317577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.083 { 00:14:21.083 "results": [ 00:14:21.083 { 00:14:21.083 "job": "raid_bdev1", 00:14:21.083 "core_mask": "0x1", 00:14:21.083 "workload": "randrw", 00:14:21.083 "percentage": 50, 00:14:21.083 "status": "finished", 00:14:21.083 "queue_depth": 2, 00:14:21.083 "io_size": 3145728, 00:14:21.083 "runtime": 8.235019, 00:14:21.083 "iops": 68.60943490233599, 00:14:21.083 "mibps": 205.828304707008, 00:14:21.083 "io_failed": 
0, 00:14:21.083 "io_timeout": 0, 00:14:21.083 "avg_latency_us": 18894.22893843954, 00:14:21.083 "min_latency_us": 370.24978165938865, 00:14:21.083 "max_latency_us": 124547.01834061135 00:14:21.083 } 00:14:21.083 ], 00:14:21.083 "core_count": 1 00:14:21.083 } 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.083 17:55:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.083 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:21.343 /dev/nbd0 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.343 1+0 records in 00:14:21.343 1+0 records out 00:14:21.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413584 s, 9.9 MB/s 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:21.343 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.344 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:21.603 /dev/nbd1 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.603 1+0 records in 00:14:21.603 1+0 records out 00:14:21.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036198 s, 11.3 MB/s 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
size=4096 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.603 17:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.604 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.862 17:55:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.862 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:22.120 /dev/nbd1 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.120 1+0 records in 00:14:22.120 1+0 records out 00:14:22.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252779 s, 16.2 MB/s 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:22.120 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.378 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.638 17:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.638 [2024-10-25 17:55:41.055320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.638 [2024-10-25 17:55:41.055387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.638 [2024-10-25 17:55:41.055410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:22.638 [2024-10-25 17:55:41.055429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.638 [2024-10-25 17:55:41.058003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.638 [2024-10-25 17:55:41.058041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.638 [2024-10-25 17:55:41.058144] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:22.638 [2024-10-25 17:55:41.058206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.638 [2024-10-25 17:55:41.058364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.638 [2024-10-25 17:55:41.058468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.638 spare 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:22.638 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.638 17:55:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 [2024-10-25 17:55:41.158386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:22.897 [2024-10-25 17:55:41.158430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.897 [2024-10-25 17:55:41.158805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:22.897 [2024-10-25 17:55:41.159036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:22.897 [2024-10-25 17:55:41.159055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:22.897 [2024-10-25 17:55:41.159278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.897 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.898 "name": "raid_bdev1", 00:14:22.898 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:22.898 "strip_size_kb": 0, 00:14:22.898 "state": "online", 00:14:22.898 "raid_level": "raid1", 00:14:22.898 "superblock": true, 00:14:22.898 "num_base_bdevs": 4, 00:14:22.898 "num_base_bdevs_discovered": 3, 00:14:22.898 "num_base_bdevs_operational": 3, 00:14:22.898 "base_bdevs_list": [ 00:14:22.898 { 00:14:22.898 "name": "spare", 00:14:22.898 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:22.898 "is_configured": true, 00:14:22.898 "data_offset": 2048, 00:14:22.898 "data_size": 63488 00:14:22.898 }, 00:14:22.898 { 00:14:22.898 "name": null, 00:14:22.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.898 "is_configured": false, 00:14:22.898 "data_offset": 2048, 00:14:22.898 "data_size": 63488 00:14:22.898 }, 00:14:22.898 { 00:14:22.898 "name": "BaseBdev3", 00:14:22.898 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:22.898 "is_configured": true, 00:14:22.898 "data_offset": 2048, 00:14:22.898 "data_size": 63488 00:14:22.898 }, 00:14:22.898 { 00:14:22.898 "name": "BaseBdev4", 00:14:22.898 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:22.898 "is_configured": true, 00:14:22.898 
"data_offset": 2048, 00:14:22.898 "data_size": 63488 00:14:22.898 } 00:14:22.898 ] 00:14:22.898 }' 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.898 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.467 "name": "raid_bdev1", 00:14:23.467 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:23.467 "strip_size_kb": 0, 00:14:23.467 "state": "online", 00:14:23.467 "raid_level": "raid1", 00:14:23.467 "superblock": true, 00:14:23.467 "num_base_bdevs": 4, 00:14:23.467 "num_base_bdevs_discovered": 3, 00:14:23.467 "num_base_bdevs_operational": 3, 00:14:23.467 "base_bdevs_list": [ 00:14:23.467 { 00:14:23.467 "name": "spare", 00:14:23.467 "uuid": 
"a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:23.467 "is_configured": true, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": null, 00:14:23.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.467 "is_configured": false, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": "BaseBdev3", 00:14:23.467 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:23.467 "is_configured": true, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": "BaseBdev4", 00:14:23.467 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:23.467 "is_configured": true, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 } 00:14:23.467 ] 00:14:23.467 }' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 
00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.467 [2024-10-25 17:55:41.830197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.467 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.467 "name": "raid_bdev1", 00:14:23.467 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:23.467 "strip_size_kb": 0, 00:14:23.467 "state": "online", 00:14:23.467 "raid_level": "raid1", 00:14:23.467 "superblock": true, 00:14:23.467 "num_base_bdevs": 4, 00:14:23.467 "num_base_bdevs_discovered": 2, 00:14:23.467 "num_base_bdevs_operational": 2, 00:14:23.467 "base_bdevs_list": [ 00:14:23.467 { 00:14:23.467 "name": null, 00:14:23.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.467 "is_configured": false, 00:14:23.467 "data_offset": 0, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": null, 00:14:23.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.467 "is_configured": false, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": "BaseBdev3", 00:14:23.467 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:23.467 "is_configured": true, 00:14:23.467 "data_offset": 2048, 00:14:23.467 "data_size": 63488 00:14:23.467 }, 00:14:23.467 { 00:14:23.467 "name": "BaseBdev4", 00:14:23.467 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:23.468 "is_configured": true, 00:14:23.468 "data_offset": 2048, 00:14:23.468 "data_size": 63488 00:14:23.468 } 00:14:23.468 ] 00:14:23.468 }' 00:14:23.468 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.468 17:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.037 17:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:14:24.037 17:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.037 17:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.037 [2024-10-25 17:55:42.281536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.037 [2024-10-25 17:55:42.281804] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:24.037 [2024-10-25 17:55:42.281821] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:24.037 [2024-10-25 17:55:42.281884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.037 [2024-10-25 17:55:42.297296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:24.037 17:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.037 17:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:24.037 [2024-10-25 17:55:42.299464] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.974 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.974 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.975 "name": "raid_bdev1", 00:14:24.975 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:24.975 "strip_size_kb": 0, 00:14:24.975 "state": "online", 00:14:24.975 "raid_level": "raid1", 00:14:24.975 "superblock": true, 00:14:24.975 "num_base_bdevs": 4, 00:14:24.975 "num_base_bdevs_discovered": 3, 00:14:24.975 "num_base_bdevs_operational": 3, 00:14:24.975 "process": { 00:14:24.975 "type": "rebuild", 00:14:24.975 "target": "spare", 00:14:24.975 "progress": { 00:14:24.975 "blocks": 20480, 00:14:24.975 "percent": 32 00:14:24.975 } 00:14:24.975 }, 00:14:24.975 "base_bdevs_list": [ 00:14:24.975 { 00:14:24.975 "name": "spare", 00:14:24.975 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:24.975 "is_configured": true, 00:14:24.975 "data_offset": 2048, 00:14:24.975 "data_size": 63488 00:14:24.975 }, 00:14:24.975 { 00:14:24.975 "name": null, 00:14:24.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.975 "is_configured": false, 00:14:24.975 "data_offset": 2048, 00:14:24.975 "data_size": 63488 00:14:24.975 }, 00:14:24.975 { 00:14:24.975 "name": "BaseBdev3", 00:14:24.975 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:24.975 "is_configured": true, 00:14:24.975 "data_offset": 2048, 00:14:24.975 "data_size": 63488 00:14:24.975 }, 00:14:24.975 { 00:14:24.975 "name": "BaseBdev4", 00:14:24.975 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:24.975 "is_configured": true, 00:14:24.975 "data_offset": 2048, 00:14:24.975 "data_size": 63488 00:14:24.975 } 00:14:24.975 
] 00:14:24.975 }' 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.975 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.234 [2024-10-25 17:55:43.459610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.234 [2024-10-25 17:55:43.509330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:25.234 [2024-10-25 17:55:43.509407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.234 [2024-10-25 17:55:43.509431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.234 [2024-10-25 17:55:43.509438] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.234 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.234 "name": "raid_bdev1", 00:14:25.234 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:25.234 "strip_size_kb": 0, 00:14:25.234 "state": "online", 00:14:25.234 "raid_level": "raid1", 00:14:25.234 "superblock": true, 00:14:25.234 "num_base_bdevs": 4, 00:14:25.234 "num_base_bdevs_discovered": 2, 00:14:25.234 "num_base_bdevs_operational": 2, 00:14:25.234 "base_bdevs_list": [ 00:14:25.234 { 00:14:25.234 "name": null, 00:14:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.234 "is_configured": false, 00:14:25.234 "data_offset": 0, 00:14:25.234 "data_size": 63488 00:14:25.234 }, 00:14:25.234 { 
00:14:25.234 "name": null, 00:14:25.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.235 "is_configured": false, 00:14:25.235 "data_offset": 2048, 00:14:25.235 "data_size": 63488 00:14:25.235 }, 00:14:25.235 { 00:14:25.235 "name": "BaseBdev3", 00:14:25.235 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:25.235 "is_configured": true, 00:14:25.235 "data_offset": 2048, 00:14:25.235 "data_size": 63488 00:14:25.235 }, 00:14:25.235 { 00:14:25.235 "name": "BaseBdev4", 00:14:25.235 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:25.235 "is_configured": true, 00:14:25.235 "data_offset": 2048, 00:14:25.235 "data_size": 63488 00:14:25.235 } 00:14:25.235 ] 00:14:25.235 }' 00:14:25.235 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.235 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.803 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.804 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.804 17:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.804 [2024-10-25 17:55:43.988603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.804 [2024-10-25 17:55:43.988697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.804 [2024-10-25 17:55:43.988732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:25.804 [2024-10-25 17:55:43.988743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.804 [2024-10-25 17:55:43.989333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.804 [2024-10-25 17:55:43.989360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.804 [2024-10-25 
17:55:43.989477] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:25.804 [2024-10-25 17:55:43.989497] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:25.804 [2024-10-25 17:55:43.989512] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:25.804 [2024-10-25 17:55:43.989545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.804 [2024-10-25 17:55:44.005351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:25.804 spare 00:14:25.804 17:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.804 17:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:25.804 [2024-10-25 17:55:44.007516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.745 17:55:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.745 "name": "raid_bdev1", 00:14:26.745 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:26.745 "strip_size_kb": 0, 00:14:26.745 "state": "online", 00:14:26.745 "raid_level": "raid1", 00:14:26.745 "superblock": true, 00:14:26.745 "num_base_bdevs": 4, 00:14:26.745 "num_base_bdevs_discovered": 3, 00:14:26.745 "num_base_bdevs_operational": 3, 00:14:26.745 "process": { 00:14:26.745 "type": "rebuild", 00:14:26.745 "target": "spare", 00:14:26.745 "progress": { 00:14:26.745 "blocks": 20480, 00:14:26.745 "percent": 32 00:14:26.745 } 00:14:26.745 }, 00:14:26.745 "base_bdevs_list": [ 00:14:26.745 { 00:14:26.745 "name": "spare", 00:14:26.745 "uuid": "a51a310f-0b6f-561d-ab0e-5dc6bd6b18fe", 00:14:26.745 "is_configured": true, 00:14:26.745 "data_offset": 2048, 00:14:26.745 "data_size": 63488 00:14:26.745 }, 00:14:26.745 { 00:14:26.745 "name": null, 00:14:26.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.745 "is_configured": false, 00:14:26.745 "data_offset": 2048, 00:14:26.745 "data_size": 63488 00:14:26.745 }, 00:14:26.745 { 00:14:26.745 "name": "BaseBdev3", 00:14:26.745 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:26.745 "is_configured": true, 00:14:26.745 "data_offset": 2048, 00:14:26.745 "data_size": 63488 00:14:26.745 }, 00:14:26.745 { 00:14:26.745 "name": "BaseBdev4", 00:14:26.745 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:26.745 "is_configured": true, 00:14:26.745 "data_offset": 2048, 00:14:26.745 "data_size": 63488 00:14:26.745 } 00:14:26.745 ] 00:14:26.745 }' 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.745 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.745 [2024-10-25 17:55:45.171892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.005 [2024-10-25 17:55:45.217589] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.005 [2024-10-25 17:55:45.217752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.005 [2024-10-25 17:55:45.217773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.005 [2024-10-25 17:55:45.217784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.005 "name": "raid_bdev1", 00:14:27.005 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:27.005 "strip_size_kb": 0, 00:14:27.005 "state": "online", 00:14:27.005 "raid_level": "raid1", 00:14:27.005 "superblock": true, 00:14:27.005 "num_base_bdevs": 4, 00:14:27.005 "num_base_bdevs_discovered": 2, 00:14:27.005 "num_base_bdevs_operational": 2, 00:14:27.005 "base_bdevs_list": [ 00:14:27.005 { 00:14:27.005 "name": null, 00:14:27.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.005 "is_configured": false, 00:14:27.005 "data_offset": 0, 00:14:27.005 "data_size": 63488 00:14:27.005 }, 00:14:27.005 { 00:14:27.005 "name": null, 00:14:27.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.005 "is_configured": false, 00:14:27.005 "data_offset": 2048, 00:14:27.005 "data_size": 63488 00:14:27.005 }, 
00:14:27.005 { 00:14:27.005 "name": "BaseBdev3", 00:14:27.005 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:27.005 "is_configured": true, 00:14:27.005 "data_offset": 2048, 00:14:27.005 "data_size": 63488 00:14:27.005 }, 00:14:27.005 { 00:14:27.005 "name": "BaseBdev4", 00:14:27.005 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:27.005 "is_configured": true, 00:14:27.005 "data_offset": 2048, 00:14:27.005 "data_size": 63488 00:14:27.005 } 00:14:27.005 ] 00:14:27.005 }' 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.005 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.265 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.265 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.265 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.265 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.524 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.525 "name": "raid_bdev1", 00:14:27.525 "uuid": 
"5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:27.525 "strip_size_kb": 0, 00:14:27.525 "state": "online", 00:14:27.525 "raid_level": "raid1", 00:14:27.525 "superblock": true, 00:14:27.525 "num_base_bdevs": 4, 00:14:27.525 "num_base_bdevs_discovered": 2, 00:14:27.525 "num_base_bdevs_operational": 2, 00:14:27.525 "base_bdevs_list": [ 00:14:27.525 { 00:14:27.525 "name": null, 00:14:27.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.525 "is_configured": false, 00:14:27.525 "data_offset": 0, 00:14:27.525 "data_size": 63488 00:14:27.525 }, 00:14:27.525 { 00:14:27.525 "name": null, 00:14:27.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.525 "is_configured": false, 00:14:27.525 "data_offset": 2048, 00:14:27.525 "data_size": 63488 00:14:27.525 }, 00:14:27.525 { 00:14:27.525 "name": "BaseBdev3", 00:14:27.525 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:27.525 "is_configured": true, 00:14:27.525 "data_offset": 2048, 00:14:27.525 "data_size": 63488 00:14:27.525 }, 00:14:27.525 { 00:14:27.525 "name": "BaseBdev4", 00:14:27.525 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:27.525 "is_configured": true, 00:14:27.525 "data_offset": 2048, 00:14:27.525 "data_size": 63488 00:14:27.525 } 00:14:27.525 ] 00:14:27.525 }' 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.525 17:55:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.525 [2024-10-25 17:55:45.865048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:27.525 [2024-10-25 17:55:45.865131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.525 [2024-10-25 17:55:45.865155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:27.525 [2024-10-25 17:55:45.865168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.525 [2024-10-25 17:55:45.865686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.525 [2024-10-25 17:55:45.865719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.525 [2024-10-25 17:55:45.865816] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:27.525 [2024-10-25 17:55:45.865855] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:27.525 [2024-10-25 17:55:45.865863] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:27.525 [2024-10-25 17:55:45.865877] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:27.525 BaseBdev1 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:27.525 17:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.745 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.745 "name": "raid_bdev1", 00:14:28.745 "uuid": 
"5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:28.745 "strip_size_kb": 0, 00:14:28.745 "state": "online", 00:14:28.745 "raid_level": "raid1", 00:14:28.745 "superblock": true, 00:14:28.745 "num_base_bdevs": 4, 00:14:28.745 "num_base_bdevs_discovered": 2, 00:14:28.745 "num_base_bdevs_operational": 2, 00:14:28.745 "base_bdevs_list": [ 00:14:28.745 { 00:14:28.745 "name": null, 00:14:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.745 "is_configured": false, 00:14:28.745 "data_offset": 0, 00:14:28.745 "data_size": 63488 00:14:28.745 }, 00:14:28.745 { 00:14:28.745 "name": null, 00:14:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.745 "is_configured": false, 00:14:28.745 "data_offset": 2048, 00:14:28.745 "data_size": 63488 00:14:28.745 }, 00:14:28.745 { 00:14:28.745 "name": "BaseBdev3", 00:14:28.745 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:28.745 "is_configured": true, 00:14:28.745 "data_offset": 2048, 00:14:28.745 "data_size": 63488 00:14:28.745 }, 00:14:28.745 { 00:14:28.745 "name": "BaseBdev4", 00:14:28.745 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:28.745 "is_configured": true, 00:14:28.745 "data_offset": 2048, 00:14:28.745 "data_size": 63488 00:14:28.745 } 00:14:28.745 ] 00:14:28.745 }' 00:14:28.745 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.745 17:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.005 "name": "raid_bdev1", 00:14:29.005 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:29.005 "strip_size_kb": 0, 00:14:29.005 "state": "online", 00:14:29.005 "raid_level": "raid1", 00:14:29.005 "superblock": true, 00:14:29.005 "num_base_bdevs": 4, 00:14:29.005 "num_base_bdevs_discovered": 2, 00:14:29.005 "num_base_bdevs_operational": 2, 00:14:29.005 "base_bdevs_list": [ 00:14:29.005 { 00:14:29.005 "name": null, 00:14:29.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.005 "is_configured": false, 00:14:29.005 "data_offset": 0, 00:14:29.005 "data_size": 63488 00:14:29.005 }, 00:14:29.005 { 00:14:29.005 "name": null, 00:14:29.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.005 "is_configured": false, 00:14:29.005 "data_offset": 2048, 00:14:29.005 "data_size": 63488 00:14:29.005 }, 00:14:29.005 { 00:14:29.005 "name": "BaseBdev3", 00:14:29.005 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:29.005 "is_configured": true, 00:14:29.005 "data_offset": 2048, 00:14:29.005 "data_size": 63488 00:14:29.005 }, 00:14:29.005 { 00:14:29.005 "name": "BaseBdev4", 00:14:29.005 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:29.005 "is_configured": true, 00:14:29.005 "data_offset": 2048, 00:14:29.005 "data_size": 63488 00:14:29.005 
} 00:14:29.005 ] 00:14:29.005 }' 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.005 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.266 [2024-10-25 17:55:47.476598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.266 [2024-10-25 17:55:47.476849] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:14:29.266 [2024-10-25 17:55:47.476865] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:29.266 request: 00:14:29.266 { 00:14:29.266 "base_bdev": "BaseBdev1", 00:14:29.266 "raid_bdev": "raid_bdev1", 00:14:29.266 "method": "bdev_raid_add_base_bdev", 00:14:29.266 "req_id": 1 00:14:29.266 } 00:14:29.266 Got JSON-RPC error response 00:14:29.266 response: 00:14:29.266 { 00:14:29.266 "code": -22, 00:14:29.266 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:29.266 } 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.266 17:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.206 "name": "raid_bdev1", 00:14:30.206 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:30.206 "strip_size_kb": 0, 00:14:30.206 "state": "online", 00:14:30.206 "raid_level": "raid1", 00:14:30.206 "superblock": true, 00:14:30.206 "num_base_bdevs": 4, 00:14:30.206 "num_base_bdevs_discovered": 2, 00:14:30.206 "num_base_bdevs_operational": 2, 00:14:30.206 "base_bdevs_list": [ 00:14:30.206 { 00:14:30.206 "name": null, 00:14:30.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.206 "is_configured": false, 00:14:30.206 "data_offset": 0, 00:14:30.206 "data_size": 63488 00:14:30.206 }, 00:14:30.206 { 00:14:30.206 "name": null, 00:14:30.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.206 "is_configured": false, 00:14:30.206 "data_offset": 2048, 00:14:30.206 "data_size": 63488 00:14:30.206 }, 00:14:30.206 { 00:14:30.206 "name": "BaseBdev3", 00:14:30.206 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:30.206 "is_configured": true, 00:14:30.206 
"data_offset": 2048, 00:14:30.206 "data_size": 63488 00:14:30.206 }, 00:14:30.206 { 00:14:30.206 "name": "BaseBdev4", 00:14:30.206 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:30.206 "is_configured": true, 00:14:30.206 "data_offset": 2048, 00:14:30.206 "data_size": 63488 00:14:30.206 } 00:14:30.206 ] 00:14:30.206 }' 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.206 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.777 "name": "raid_bdev1", 00:14:30.777 "uuid": "5e7af350-d3fb-44b1-b89f-a17b6e3de957", 00:14:30.777 "strip_size_kb": 0, 00:14:30.777 "state": "online", 00:14:30.777 "raid_level": "raid1", 00:14:30.777 "superblock": true, 
00:14:30.777 "num_base_bdevs": 4, 00:14:30.777 "num_base_bdevs_discovered": 2, 00:14:30.777 "num_base_bdevs_operational": 2, 00:14:30.777 "base_bdevs_list": [ 00:14:30.777 { 00:14:30.777 "name": null, 00:14:30.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.777 "is_configured": false, 00:14:30.777 "data_offset": 0, 00:14:30.777 "data_size": 63488 00:14:30.777 }, 00:14:30.777 { 00:14:30.777 "name": null, 00:14:30.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.777 "is_configured": false, 00:14:30.777 "data_offset": 2048, 00:14:30.777 "data_size": 63488 00:14:30.777 }, 00:14:30.777 { 00:14:30.777 "name": "BaseBdev3", 00:14:30.777 "uuid": "7d739398-dbdb-529e-8fc7-c00cf2249c40", 00:14:30.777 "is_configured": true, 00:14:30.777 "data_offset": 2048, 00:14:30.777 "data_size": 63488 00:14:30.777 }, 00:14:30.777 { 00:14:30.777 "name": "BaseBdev4", 00:14:30.777 "uuid": "4cae1952-3c19-5ffe-b72c-e7d03a023121", 00:14:30.777 "is_configured": true, 00:14:30.777 "data_offset": 2048, 00:14:30.777 "data_size": 63488 00:14:30.777 } 00:14:30.777 ] 00:14:30.777 }' 00:14:30.777 17:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79066 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79066 ']' 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79066 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:30.777 17:55:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79066 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.777 killing process with pid 79066 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79066' 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79066 00:14:30.777 Received shutdown signal, test time was about 18.089171 seconds 00:14:30.777 00:14:30.777 Latency(us) 00:14:30.777 [2024-10-25T17:55:49.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.777 [2024-10-25T17:55:49.213Z] =================================================================================================================== 00:14:30.777 [2024-10-25T17:55:49.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.777 [2024-10-25 17:55:49.129220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.777 17:55:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79066 00:14:30.777 [2024-10-25 17:55:49.129407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.777 [2024-10-25 17:55:49.129501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.777 [2024-10-25 17:55:49.129515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:31.348 [2024-10-25 17:55:49.590978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.738 17:55:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:32.738 00:14:32.738 real 0m21.930s 00:14:32.738 user 0m28.465s 00:14:32.738 sys 0m2.783s 00:14:32.738 17:55:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.738 17:55:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.738 ************************************ 00:14:32.738 END TEST raid_rebuild_test_sb_io 00:14:32.738 ************************************ 00:14:32.738 17:55:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:32.738 17:55:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:32.738 17:55:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:32.738 17:55:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.738 17:55:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.738 ************************************ 00:14:32.738 START TEST raid5f_state_function_test 00:14:32.738 ************************************ 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79796 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:32.738 Process raid pid: 79796 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79796' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79796 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79796 ']' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.738 17:55:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.738 [2024-10-25 17:55:51.051998] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:14:32.738 [2024-10-25 17:55:51.052119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.997 [2024-10-25 17:55:51.227124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.997 [2024-10-25 17:55:51.370985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.257 [2024-10-25 17:55:51.615043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.257 [2024-10-25 17:55:51.615103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.518 [2024-10-25 17:55:51.886196] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.518 [2024-10-25 17:55:51.886277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.518 [2024-10-25 17:55:51.886289] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.518 [2024-10-25 17:55:51.886300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.518 [2024-10-25 17:55:51.886306] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:33.518 [2024-10-25 17:55:51.886315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:33.518 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.518 "name": "Existed_Raid", 00:14:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.518 "strip_size_kb": 64, 00:14:33.518 "state": "configuring", 00:14:33.518 "raid_level": "raid5f", 00:14:33.519 "superblock": false, 00:14:33.519 "num_base_bdevs": 3, 00:14:33.519 "num_base_bdevs_discovered": 0, 00:14:33.519 "num_base_bdevs_operational": 3, 00:14:33.519 "base_bdevs_list": [ 00:14:33.519 { 00:14:33.519 "name": "BaseBdev1", 00:14:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.519 "is_configured": false, 00:14:33.519 "data_offset": 0, 00:14:33.519 "data_size": 0 00:14:33.519 }, 00:14:33.519 { 00:14:33.519 "name": "BaseBdev2", 00:14:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.519 "is_configured": false, 00:14:33.519 "data_offset": 0, 00:14:33.519 "data_size": 0 00:14:33.519 }, 00:14:33.519 { 00:14:33.519 "name": "BaseBdev3", 00:14:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.519 "is_configured": false, 00:14:33.519 "data_offset": 0, 00:14:33.519 "data_size": 0 00:14:33.519 } 00:14:33.519 ] 00:14:33.519 }' 00:14:33.519 17:55:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.519 17:55:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 [2024-10-25 17:55:52.333381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.090 [2024-10-25 17:55:52.333441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 [2024-10-25 17:55:52.345385] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.090 [2024-10-25 17:55:52.345457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.090 [2024-10-25 17:55:52.345467] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.090 [2024-10-25 17:55:52.345477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.090 [2024-10-25 17:55:52.345484] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.090 [2024-10-25 17:55:52.345493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 [2024-10-25 17:55:52.399336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.090 BaseBdev1 00:14:34.090 17:55:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.090 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.090 [ 00:14:34.090 { 00:14:34.090 "name": "BaseBdev1", 00:14:34.090 "aliases": [ 00:14:34.090 "6dc6271b-c3f0-4fc4-925a-374b5f20a23f" 00:14:34.090 ], 00:14:34.090 "product_name": "Malloc disk", 00:14:34.090 "block_size": 512, 00:14:34.090 "num_blocks": 65536, 00:14:34.090 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:34.090 "assigned_rate_limits": { 00:14:34.090 "rw_ios_per_sec": 0, 00:14:34.090 
"rw_mbytes_per_sec": 0, 00:14:34.090 "r_mbytes_per_sec": 0, 00:14:34.090 "w_mbytes_per_sec": 0 00:14:34.090 }, 00:14:34.090 "claimed": true, 00:14:34.090 "claim_type": "exclusive_write", 00:14:34.090 "zoned": false, 00:14:34.090 "supported_io_types": { 00:14:34.090 "read": true, 00:14:34.090 "write": true, 00:14:34.090 "unmap": true, 00:14:34.090 "flush": true, 00:14:34.090 "reset": true, 00:14:34.090 "nvme_admin": false, 00:14:34.090 "nvme_io": false, 00:14:34.090 "nvme_io_md": false, 00:14:34.090 "write_zeroes": true, 00:14:34.090 "zcopy": true, 00:14:34.090 "get_zone_info": false, 00:14:34.090 "zone_management": false, 00:14:34.090 "zone_append": false, 00:14:34.090 "compare": false, 00:14:34.091 "compare_and_write": false, 00:14:34.091 "abort": true, 00:14:34.091 "seek_hole": false, 00:14:34.091 "seek_data": false, 00:14:34.091 "copy": true, 00:14:34.091 "nvme_iov_md": false 00:14:34.091 }, 00:14:34.091 "memory_domains": [ 00:14:34.091 { 00:14:34.091 "dma_device_id": "system", 00:14:34.091 "dma_device_type": 1 00:14:34.091 }, 00:14:34.091 { 00:14:34.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.091 "dma_device_type": 2 00:14:34.091 } 00:14:34.091 ], 00:14:34.091 "driver_specific": {} 00:14:34.091 } 00:14:34.091 ] 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.091 17:55:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.091 "name": "Existed_Raid", 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.091 "strip_size_kb": 64, 00:14:34.091 "state": "configuring", 00:14:34.091 "raid_level": "raid5f", 00:14:34.091 "superblock": false, 00:14:34.091 "num_base_bdevs": 3, 00:14:34.091 "num_base_bdevs_discovered": 1, 00:14:34.091 "num_base_bdevs_operational": 3, 00:14:34.091 "base_bdevs_list": [ 00:14:34.091 { 00:14:34.091 "name": "BaseBdev1", 00:14:34.091 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:34.091 "is_configured": true, 00:14:34.091 "data_offset": 0, 00:14:34.091 "data_size": 65536 00:14:34.091 }, 00:14:34.091 { 00:14:34.091 "name": 
"BaseBdev2", 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.091 "is_configured": false, 00:14:34.091 "data_offset": 0, 00:14:34.091 "data_size": 0 00:14:34.091 }, 00:14:34.091 { 00:14:34.091 "name": "BaseBdev3", 00:14:34.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.091 "is_configured": false, 00:14:34.091 "data_offset": 0, 00:14:34.091 "data_size": 0 00:14:34.091 } 00:14:34.091 ] 00:14:34.091 }' 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.091 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.674 [2024-10-25 17:55:52.862607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.674 [2024-10-25 17:55:52.862696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.674 [2024-10-25 17:55:52.874684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.674 [2024-10-25 17:55:52.876903] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:34.674 [2024-10-25 17:55:52.876958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.674 [2024-10-25 17:55:52.876970] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.674 [2024-10-25 17:55:52.876980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.674 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.675 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.675 "name": "Existed_Raid", 00:14:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.675 "strip_size_kb": 64, 00:14:34.676 "state": "configuring", 00:14:34.676 "raid_level": "raid5f", 00:14:34.676 "superblock": false, 00:14:34.676 "num_base_bdevs": 3, 00:14:34.676 "num_base_bdevs_discovered": 1, 00:14:34.676 "num_base_bdevs_operational": 3, 00:14:34.676 "base_bdevs_list": [ 00:14:34.676 { 00:14:34.676 "name": "BaseBdev1", 00:14:34.676 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:34.676 "is_configured": true, 00:14:34.676 "data_offset": 0, 00:14:34.676 "data_size": 65536 00:14:34.676 }, 00:14:34.676 { 00:14:34.676 "name": "BaseBdev2", 00:14:34.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.676 "is_configured": false, 00:14:34.676 "data_offset": 0, 00:14:34.676 "data_size": 0 00:14:34.676 }, 00:14:34.676 { 00:14:34.676 "name": "BaseBdev3", 00:14:34.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.676 "is_configured": false, 00:14:34.676 "data_offset": 0, 00:14:34.676 "data_size": 0 00:14:34.676 } 00:14:34.676 ] 00:14:34.676 }' 00:14:34.677 17:55:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.677 17:55:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.945 [2024-10-25 17:55:53.360055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.945 BaseBdev2 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.945 17:55:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.206 [ 00:14:35.206 { 00:14:35.206 "name": "BaseBdev2", 00:14:35.206 "aliases": [ 00:14:35.206 "9a2991b1-e8eb-41d8-88a8-146a0566b6cd" 00:14:35.206 ], 00:14:35.206 "product_name": "Malloc disk", 00:14:35.206 "block_size": 512, 00:14:35.206 "num_blocks": 65536, 00:14:35.206 "uuid": "9a2991b1-e8eb-41d8-88a8-146a0566b6cd", 00:14:35.206 "assigned_rate_limits": { 00:14:35.206 "rw_ios_per_sec": 0, 00:14:35.206 "rw_mbytes_per_sec": 0, 00:14:35.206 "r_mbytes_per_sec": 0, 00:14:35.206 "w_mbytes_per_sec": 0 00:14:35.206 }, 00:14:35.206 "claimed": true, 00:14:35.206 "claim_type": "exclusive_write", 00:14:35.206 "zoned": false, 00:14:35.206 "supported_io_types": { 00:14:35.206 "read": true, 00:14:35.206 "write": true, 00:14:35.206 "unmap": true, 00:14:35.206 "flush": true, 00:14:35.206 "reset": true, 00:14:35.206 "nvme_admin": false, 00:14:35.206 "nvme_io": false, 00:14:35.206 "nvme_io_md": false, 00:14:35.206 "write_zeroes": true, 00:14:35.206 "zcopy": true, 00:14:35.206 "get_zone_info": false, 00:14:35.206 "zone_management": false, 00:14:35.206 "zone_append": false, 00:14:35.206 "compare": false, 00:14:35.206 "compare_and_write": false, 00:14:35.206 "abort": true, 00:14:35.206 "seek_hole": false, 00:14:35.206 "seek_data": false, 00:14:35.206 "copy": true, 00:14:35.206 "nvme_iov_md": false 00:14:35.206 }, 00:14:35.206 "memory_domains": [ 00:14:35.206 { 00:14:35.206 "dma_device_id": "system", 00:14:35.206 "dma_device_type": 1 00:14:35.206 }, 00:14:35.206 { 00:14:35.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.206 "dma_device_type": 2 00:14:35.206 } 00:14:35.206 ], 00:14:35.206 "driver_specific": {} 00:14:35.206 } 00:14:35.206 ] 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:35.206 "name": "Existed_Raid", 00:14:35.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.206 "strip_size_kb": 64, 00:14:35.206 "state": "configuring", 00:14:35.206 "raid_level": "raid5f", 00:14:35.206 "superblock": false, 00:14:35.206 "num_base_bdevs": 3, 00:14:35.206 "num_base_bdevs_discovered": 2, 00:14:35.206 "num_base_bdevs_operational": 3, 00:14:35.206 "base_bdevs_list": [ 00:14:35.206 { 00:14:35.206 "name": "BaseBdev1", 00:14:35.206 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:35.206 "is_configured": true, 00:14:35.206 "data_offset": 0, 00:14:35.206 "data_size": 65536 00:14:35.206 }, 00:14:35.206 { 00:14:35.206 "name": "BaseBdev2", 00:14:35.206 "uuid": "9a2991b1-e8eb-41d8-88a8-146a0566b6cd", 00:14:35.206 "is_configured": true, 00:14:35.206 "data_offset": 0, 00:14:35.206 "data_size": 65536 00:14:35.206 }, 00:14:35.206 { 00:14:35.206 "name": "BaseBdev3", 00:14:35.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.206 "is_configured": false, 00:14:35.206 "data_offset": 0, 00:14:35.206 "data_size": 0 00:14:35.206 } 00:14:35.206 ] 00:14:35.206 }' 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.206 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.466 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:35.466 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.466 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.726 [2024-10-25 17:55:53.907479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.726 [2024-10-25 17:55:53.907568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:35.726 [2024-10-25 17:55:53.907584] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:35.726 [2024-10-25 17:55:53.907905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:35.726 [2024-10-25 17:55:53.913330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:35.726 [2024-10-25 17:55:53.913359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:35.726 [2024-10-25 17:55:53.913743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.726 BaseBdev3 00:14:35.726 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.726 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:35.726 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.727 [ 00:14:35.727 { 00:14:35.727 "name": "BaseBdev3", 00:14:35.727 "aliases": [ 00:14:35.727 "74e4c2f1-ef0c-4c49-8d4a-6342d1bcb79e" 00:14:35.727 ], 00:14:35.727 "product_name": "Malloc disk", 00:14:35.727 "block_size": 512, 00:14:35.727 "num_blocks": 65536, 00:14:35.727 "uuid": "74e4c2f1-ef0c-4c49-8d4a-6342d1bcb79e", 00:14:35.727 "assigned_rate_limits": { 00:14:35.727 "rw_ios_per_sec": 0, 00:14:35.727 "rw_mbytes_per_sec": 0, 00:14:35.727 "r_mbytes_per_sec": 0, 00:14:35.727 "w_mbytes_per_sec": 0 00:14:35.727 }, 00:14:35.727 "claimed": true, 00:14:35.727 "claim_type": "exclusive_write", 00:14:35.727 "zoned": false, 00:14:35.727 "supported_io_types": { 00:14:35.727 "read": true, 00:14:35.727 "write": true, 00:14:35.727 "unmap": true, 00:14:35.727 "flush": true, 00:14:35.727 "reset": true, 00:14:35.727 "nvme_admin": false, 00:14:35.727 "nvme_io": false, 00:14:35.727 "nvme_io_md": false, 00:14:35.727 "write_zeroes": true, 00:14:35.727 "zcopy": true, 00:14:35.727 "get_zone_info": false, 00:14:35.727 "zone_management": false, 00:14:35.727 "zone_append": false, 00:14:35.727 "compare": false, 00:14:35.727 "compare_and_write": false, 00:14:35.727 "abort": true, 00:14:35.727 "seek_hole": false, 00:14:35.727 "seek_data": false, 00:14:35.727 "copy": true, 00:14:35.727 "nvme_iov_md": false 00:14:35.727 }, 00:14:35.727 "memory_domains": [ 00:14:35.727 { 00:14:35.727 "dma_device_id": "system", 00:14:35.727 "dma_device_type": 1 00:14:35.727 }, 00:14:35.727 { 00:14:35.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.727 "dma_device_type": 2 00:14:35.727 } 00:14:35.727 ], 00:14:35.727 "driver_specific": {} 00:14:35.727 } 00:14:35.727 ] 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.727 17:55:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.727 "name": "Existed_Raid", 00:14:35.727 "uuid": "2fcc7fea-67fd-42e9-84b7-5e603eb6bd86", 00:14:35.727 "strip_size_kb": 64, 00:14:35.727 "state": "online", 00:14:35.727 "raid_level": "raid5f", 00:14:35.727 "superblock": false, 00:14:35.727 "num_base_bdevs": 3, 00:14:35.727 "num_base_bdevs_discovered": 3, 00:14:35.727 "num_base_bdevs_operational": 3, 00:14:35.727 "base_bdevs_list": [ 00:14:35.727 { 00:14:35.727 "name": "BaseBdev1", 00:14:35.727 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:35.727 "is_configured": true, 00:14:35.727 "data_offset": 0, 00:14:35.727 "data_size": 65536 00:14:35.727 }, 00:14:35.727 { 00:14:35.727 "name": "BaseBdev2", 00:14:35.727 "uuid": "9a2991b1-e8eb-41d8-88a8-146a0566b6cd", 00:14:35.727 "is_configured": true, 00:14:35.727 "data_offset": 0, 00:14:35.727 "data_size": 65536 00:14:35.727 }, 00:14:35.727 { 00:14:35.727 "name": "BaseBdev3", 00:14:35.727 "uuid": "74e4c2f1-ef0c-4c49-8d4a-6342d1bcb79e", 00:14:35.727 "is_configured": true, 00:14:35.727 "data_offset": 0, 00:14:35.727 "data_size": 65536 00:14:35.727 } 00:14:35.727 ] 00:14:35.727 }' 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.727 17:55:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.988 17:55:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.988 [2024-10-25 17:55:54.312794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.988 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.988 "name": "Existed_Raid", 00:14:35.988 "aliases": [ 00:14:35.988 "2fcc7fea-67fd-42e9-84b7-5e603eb6bd86" 00:14:35.988 ], 00:14:35.988 "product_name": "Raid Volume", 00:14:35.988 "block_size": 512, 00:14:35.988 "num_blocks": 131072, 00:14:35.988 "uuid": "2fcc7fea-67fd-42e9-84b7-5e603eb6bd86", 00:14:35.988 "assigned_rate_limits": { 00:14:35.988 "rw_ios_per_sec": 0, 00:14:35.988 "rw_mbytes_per_sec": 0, 00:14:35.988 "r_mbytes_per_sec": 0, 00:14:35.988 "w_mbytes_per_sec": 0 00:14:35.988 }, 00:14:35.988 "claimed": false, 00:14:35.988 "zoned": false, 00:14:35.988 "supported_io_types": { 00:14:35.988 "read": true, 00:14:35.988 "write": true, 00:14:35.988 "unmap": false, 00:14:35.988 "flush": false, 00:14:35.988 "reset": true, 00:14:35.988 "nvme_admin": false, 00:14:35.988 "nvme_io": false, 00:14:35.988 "nvme_io_md": false, 00:14:35.988 "write_zeroes": true, 00:14:35.988 "zcopy": false, 00:14:35.988 "get_zone_info": false, 00:14:35.988 "zone_management": false, 00:14:35.988 "zone_append": false, 
00:14:35.988 "compare": false, 00:14:35.988 "compare_and_write": false, 00:14:35.988 "abort": false, 00:14:35.988 "seek_hole": false, 00:14:35.988 "seek_data": false, 00:14:35.988 "copy": false, 00:14:35.988 "nvme_iov_md": false 00:14:35.988 }, 00:14:35.988 "driver_specific": { 00:14:35.988 "raid": { 00:14:35.988 "uuid": "2fcc7fea-67fd-42e9-84b7-5e603eb6bd86", 00:14:35.988 "strip_size_kb": 64, 00:14:35.988 "state": "online", 00:14:35.988 "raid_level": "raid5f", 00:14:35.988 "superblock": false, 00:14:35.988 "num_base_bdevs": 3, 00:14:35.988 "num_base_bdevs_discovered": 3, 00:14:35.988 "num_base_bdevs_operational": 3, 00:14:35.988 "base_bdevs_list": [ 00:14:35.988 { 00:14:35.988 "name": "BaseBdev1", 00:14:35.988 "uuid": "6dc6271b-c3f0-4fc4-925a-374b5f20a23f", 00:14:35.988 "is_configured": true, 00:14:35.988 "data_offset": 0, 00:14:35.988 "data_size": 65536 00:14:35.988 }, 00:14:35.989 { 00:14:35.989 "name": "BaseBdev2", 00:14:35.989 "uuid": "9a2991b1-e8eb-41d8-88a8-146a0566b6cd", 00:14:35.989 "is_configured": true, 00:14:35.989 "data_offset": 0, 00:14:35.989 "data_size": 65536 00:14:35.989 }, 00:14:35.989 { 00:14:35.989 "name": "BaseBdev3", 00:14:35.989 "uuid": "74e4c2f1-ef0c-4c49-8d4a-6342d1bcb79e", 00:14:35.989 "is_configured": true, 00:14:35.989 "data_offset": 0, 00:14:35.989 "data_size": 65536 00:14:35.989 } 00:14:35.989 ] 00:14:35.989 } 00:14:35.989 } 00:14:35.989 }' 00:14:35.989 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.989 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:35.989 BaseBdev2 00:14:35.989 BaseBdev3' 00:14:35.989 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.250 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.250 [2024-10-25 17:55:54.612649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:36.511 
17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.511 "name": "Existed_Raid", 00:14:36.511 "uuid": "2fcc7fea-67fd-42e9-84b7-5e603eb6bd86", 00:14:36.511 "strip_size_kb": 64, 00:14:36.511 "state": 
"online", 00:14:36.511 "raid_level": "raid5f", 00:14:36.511 "superblock": false, 00:14:36.511 "num_base_bdevs": 3, 00:14:36.511 "num_base_bdevs_discovered": 2, 00:14:36.511 "num_base_bdevs_operational": 2, 00:14:36.511 "base_bdevs_list": [ 00:14:36.511 { 00:14:36.511 "name": null, 00:14:36.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.511 "is_configured": false, 00:14:36.511 "data_offset": 0, 00:14:36.511 "data_size": 65536 00:14:36.511 }, 00:14:36.511 { 00:14:36.511 "name": "BaseBdev2", 00:14:36.511 "uuid": "9a2991b1-e8eb-41d8-88a8-146a0566b6cd", 00:14:36.511 "is_configured": true, 00:14:36.511 "data_offset": 0, 00:14:36.511 "data_size": 65536 00:14:36.511 }, 00:14:36.511 { 00:14:36.511 "name": "BaseBdev3", 00:14:36.511 "uuid": "74e4c2f1-ef0c-4c49-8d4a-6342d1bcb79e", 00:14:36.511 "is_configured": true, 00:14:36.511 "data_offset": 0, 00:14:36.511 "data_size": 65536 00:14:36.511 } 00:14:36.511 ] 00:14:36.511 }' 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.511 17:55:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.772 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.772 [2024-10-25 17:55:55.198916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.772 [2024-10-25 17:55:55.199136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.032 [2024-10-25 17:55:55.305660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.032 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.032 [2024-10-25 17:55:55.365697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.032 [2024-10-25 17:55:55.365907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 BaseBdev2 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:37.293 [ 00:14:37.293 { 00:14:37.293 "name": "BaseBdev2", 00:14:37.293 "aliases": [ 00:14:37.293 "173eaf7b-6662-4c71-bfec-786ab00ea998" 00:14:37.293 ], 00:14:37.293 "product_name": "Malloc disk", 00:14:37.293 "block_size": 512, 00:14:37.293 "num_blocks": 65536, 00:14:37.293 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:37.293 "assigned_rate_limits": { 00:14:37.293 "rw_ios_per_sec": 0, 00:14:37.293 "rw_mbytes_per_sec": 0, 00:14:37.293 "r_mbytes_per_sec": 0, 00:14:37.293 "w_mbytes_per_sec": 0 00:14:37.293 }, 00:14:37.293 "claimed": false, 00:14:37.293 "zoned": false, 00:14:37.293 "supported_io_types": { 00:14:37.293 "read": true, 00:14:37.293 "write": true, 00:14:37.293 "unmap": true, 00:14:37.293 "flush": true, 00:14:37.293 "reset": true, 00:14:37.293 "nvme_admin": false, 00:14:37.293 "nvme_io": false, 00:14:37.293 "nvme_io_md": false, 00:14:37.293 "write_zeroes": true, 00:14:37.293 "zcopy": true, 00:14:37.293 "get_zone_info": false, 00:14:37.293 "zone_management": false, 00:14:37.293 "zone_append": false, 00:14:37.293 "compare": false, 00:14:37.293 "compare_and_write": false, 00:14:37.293 "abort": true, 00:14:37.293 "seek_hole": false, 00:14:37.293 "seek_data": false, 00:14:37.293 "copy": true, 00:14:37.293 "nvme_iov_md": false 00:14:37.293 }, 00:14:37.293 "memory_domains": [ 00:14:37.293 { 00:14:37.293 "dma_device_id": "system", 00:14:37.293 "dma_device_type": 1 00:14:37.293 }, 00:14:37.293 { 00:14:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.293 "dma_device_type": 2 00:14:37.293 } 00:14:37.293 ], 00:14:37.293 "driver_specific": {} 00:14:37.293 } 00:14:37.293 ] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 BaseBdev3 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.293 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.294 [ 00:14:37.294 { 00:14:37.294 "name": "BaseBdev3", 00:14:37.294 "aliases": [ 00:14:37.294 "e88b6b85-764a-4a08-883b-06488966b6f6" 00:14:37.294 ], 00:14:37.294 "product_name": "Malloc disk", 00:14:37.294 "block_size": 512, 00:14:37.294 "num_blocks": 65536, 00:14:37.294 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:37.294 "assigned_rate_limits": { 00:14:37.294 "rw_ios_per_sec": 0, 00:14:37.294 "rw_mbytes_per_sec": 0, 00:14:37.294 "r_mbytes_per_sec": 0, 00:14:37.294 "w_mbytes_per_sec": 0 00:14:37.294 }, 00:14:37.294 "claimed": false, 00:14:37.294 "zoned": false, 00:14:37.294 "supported_io_types": { 00:14:37.294 "read": true, 00:14:37.294 "write": true, 00:14:37.294 "unmap": true, 00:14:37.294 "flush": true, 00:14:37.294 "reset": true, 00:14:37.294 "nvme_admin": false, 00:14:37.294 "nvme_io": false, 00:14:37.294 "nvme_io_md": false, 00:14:37.294 "write_zeroes": true, 00:14:37.294 "zcopy": true, 00:14:37.294 "get_zone_info": false, 00:14:37.294 "zone_management": false, 00:14:37.294 "zone_append": false, 00:14:37.294 "compare": false, 00:14:37.294 "compare_and_write": false, 00:14:37.294 "abort": true, 00:14:37.294 "seek_hole": false, 00:14:37.294 "seek_data": false, 00:14:37.294 "copy": true, 00:14:37.294 "nvme_iov_md": false 00:14:37.294 }, 00:14:37.294 "memory_domains": [ 00:14:37.294 { 00:14:37.294 "dma_device_id": "system", 00:14:37.294 "dma_device_type": 1 00:14:37.294 }, 00:14:37.294 { 00:14:37.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.294 "dma_device_type": 2 00:14:37.294 } 00:14:37.294 ], 00:14:37.294 "driver_specific": {} 00:14:37.294 } 00:14:37.294 ] 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:37.294 17:55:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.294 [2024-10-25 17:55:55.703997] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.294 [2024-10-25 17:55:55.704152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.294 [2024-10-25 17:55:55.704211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.294 [2024-10-25 17:55:55.706472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.294 17:55:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.294 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.555 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.555 "name": "Existed_Raid", 00:14:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.555 "strip_size_kb": 64, 00:14:37.555 "state": "configuring", 00:14:37.555 "raid_level": "raid5f", 00:14:37.555 "superblock": false, 00:14:37.555 "num_base_bdevs": 3, 00:14:37.555 "num_base_bdevs_discovered": 2, 00:14:37.555 "num_base_bdevs_operational": 3, 00:14:37.555 "base_bdevs_list": [ 00:14:37.555 { 00:14:37.555 "name": "BaseBdev1", 00:14:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.555 "is_configured": false, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 0 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev2", 00:14:37.555 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 65536 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev3", 00:14:37.555 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:37.555 "is_configured": true, 
00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 65536 00:14:37.555 } 00:14:37.555 ] 00:14:37.555 }' 00:14:37.555 17:55:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.555 17:55:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.815 [2024-10-25 17:55:56.163150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.815 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.816 17:55:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.816 "name": "Existed_Raid", 00:14:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.816 "strip_size_kb": 64, 00:14:37.816 "state": "configuring", 00:14:37.816 "raid_level": "raid5f", 00:14:37.816 "superblock": false, 00:14:37.816 "num_base_bdevs": 3, 00:14:37.816 "num_base_bdevs_discovered": 1, 00:14:37.816 "num_base_bdevs_operational": 3, 00:14:37.816 "base_bdevs_list": [ 00:14:37.816 { 00:14:37.816 "name": "BaseBdev1", 00:14:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.816 "is_configured": false, 00:14:37.816 "data_offset": 0, 00:14:37.816 "data_size": 0 00:14:37.816 }, 00:14:37.816 { 00:14:37.816 "name": null, 00:14:37.816 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:37.816 "is_configured": false, 00:14:37.816 "data_offset": 0, 00:14:37.816 "data_size": 65536 00:14:37.816 }, 00:14:37.816 { 00:14:37.816 "name": "BaseBdev3", 00:14:37.816 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:37.816 "is_configured": true, 00:14:37.816 "data_offset": 0, 00:14:37.816 "data_size": 65536 00:14:37.816 } 00:14:37.816 ] 00:14:37.816 }' 00:14:37.816 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.816 17:55:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.396 [2024-10-25 17:55:56.697714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.396 BaseBdev1 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:38.396 17:55:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.396 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.396 [ 00:14:38.396 { 00:14:38.396 "name": "BaseBdev1", 00:14:38.396 "aliases": [ 00:14:38.396 "1e48b7f0-5746-4ae0-aa53-2cc431c03be8" 00:14:38.396 ], 00:14:38.396 "product_name": "Malloc disk", 00:14:38.396 "block_size": 512, 00:14:38.396 "num_blocks": 65536, 00:14:38.396 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:38.396 "assigned_rate_limits": { 00:14:38.396 "rw_ios_per_sec": 0, 00:14:38.396 "rw_mbytes_per_sec": 0, 00:14:38.396 "r_mbytes_per_sec": 0, 00:14:38.396 "w_mbytes_per_sec": 0 00:14:38.396 }, 00:14:38.396 "claimed": true, 00:14:38.396 "claim_type": "exclusive_write", 00:14:38.396 "zoned": false, 00:14:38.396 "supported_io_types": { 00:14:38.396 "read": true, 00:14:38.396 "write": true, 00:14:38.396 "unmap": true, 00:14:38.396 "flush": true, 00:14:38.396 "reset": true, 00:14:38.396 "nvme_admin": false, 00:14:38.396 "nvme_io": false, 00:14:38.396 "nvme_io_md": false, 00:14:38.396 "write_zeroes": true, 00:14:38.396 "zcopy": true, 00:14:38.396 "get_zone_info": false, 00:14:38.396 "zone_management": false, 00:14:38.396 "zone_append": false, 00:14:38.396 
"compare": false, 00:14:38.396 "compare_and_write": false, 00:14:38.396 "abort": true, 00:14:38.396 "seek_hole": false, 00:14:38.396 "seek_data": false, 00:14:38.396 "copy": true, 00:14:38.396 "nvme_iov_md": false 00:14:38.396 }, 00:14:38.396 "memory_domains": [ 00:14:38.396 { 00:14:38.396 "dma_device_id": "system", 00:14:38.396 "dma_device_type": 1 00:14:38.396 }, 00:14:38.396 { 00:14:38.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.396 "dma_device_type": 2 00:14:38.396 } 00:14:38.396 ], 00:14:38.396 "driver_specific": {} 00:14:38.396 } 00:14:38.396 ] 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.397 17:55:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.397 "name": "Existed_Raid", 00:14:38.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.397 "strip_size_kb": 64, 00:14:38.397 "state": "configuring", 00:14:38.397 "raid_level": "raid5f", 00:14:38.397 "superblock": false, 00:14:38.397 "num_base_bdevs": 3, 00:14:38.397 "num_base_bdevs_discovered": 2, 00:14:38.397 "num_base_bdevs_operational": 3, 00:14:38.397 "base_bdevs_list": [ 00:14:38.397 { 00:14:38.397 "name": "BaseBdev1", 00:14:38.397 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:38.397 "is_configured": true, 00:14:38.397 "data_offset": 0, 00:14:38.397 "data_size": 65536 00:14:38.397 }, 00:14:38.397 { 00:14:38.397 "name": null, 00:14:38.397 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:38.397 "is_configured": false, 00:14:38.397 "data_offset": 0, 00:14:38.397 "data_size": 65536 00:14:38.397 }, 00:14:38.397 { 00:14:38.397 "name": "BaseBdev3", 00:14:38.397 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:38.397 "is_configured": true, 00:14:38.397 "data_offset": 0, 00:14:38.397 "data_size": 65536 00:14:38.397 } 00:14:38.397 ] 00:14:38.397 }' 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.397 17:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 17:55:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.981 [2024-10-25 17:55:57.252920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.981 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.982 17:55:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.982 "name": "Existed_Raid", 00:14:38.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.982 "strip_size_kb": 64, 00:14:38.982 "state": "configuring", 00:14:38.982 "raid_level": "raid5f", 00:14:38.982 "superblock": false, 00:14:38.982 "num_base_bdevs": 3, 00:14:38.982 "num_base_bdevs_discovered": 1, 00:14:38.982 "num_base_bdevs_operational": 3, 00:14:38.982 "base_bdevs_list": [ 00:14:38.982 { 00:14:38.982 "name": "BaseBdev1", 00:14:38.982 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:38.982 "is_configured": true, 00:14:38.982 "data_offset": 0, 00:14:38.982 "data_size": 65536 00:14:38.982 }, 00:14:38.982 { 00:14:38.982 "name": null, 00:14:38.982 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:38.982 "is_configured": false, 00:14:38.982 "data_offset": 0, 00:14:38.982 "data_size": 65536 00:14:38.982 }, 00:14:38.982 { 00:14:38.982 "name": null, 
00:14:38.982 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:38.982 "is_configured": false, 00:14:38.982 "data_offset": 0, 00:14:38.982 "data_size": 65536 00:14:38.982 } 00:14:38.982 ] 00:14:38.982 }' 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.982 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.552 [2024-10-25 17:55:57.796632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.552 17:55:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.552 "name": "Existed_Raid", 00:14:39.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.552 "strip_size_kb": 64, 00:14:39.552 "state": "configuring", 00:14:39.552 "raid_level": "raid5f", 00:14:39.552 "superblock": false, 00:14:39.552 "num_base_bdevs": 3, 00:14:39.552 "num_base_bdevs_discovered": 2, 00:14:39.552 "num_base_bdevs_operational": 3, 00:14:39.552 "base_bdevs_list": [ 00:14:39.552 { 
00:14:39.552 "name": "BaseBdev1", 00:14:39.552 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:39.552 "is_configured": true, 00:14:39.552 "data_offset": 0, 00:14:39.552 "data_size": 65536 00:14:39.552 }, 00:14:39.552 { 00:14:39.552 "name": null, 00:14:39.552 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:39.552 "is_configured": false, 00:14:39.552 "data_offset": 0, 00:14:39.552 "data_size": 65536 00:14:39.552 }, 00:14:39.552 { 00:14:39.552 "name": "BaseBdev3", 00:14:39.552 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:39.552 "is_configured": true, 00:14:39.552 "data_offset": 0, 00:14:39.552 "data_size": 65536 00:14:39.552 } 00:14:39.552 ] 00:14:39.552 }' 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.552 17:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.812 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.812 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.812 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.812 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.072 [2024-10-25 17:55:58.296579] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.072 "name": "Existed_Raid", 00:14:40.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.072 "strip_size_kb": 64, 00:14:40.072 "state": "configuring", 00:14:40.072 "raid_level": "raid5f", 00:14:40.072 "superblock": false, 00:14:40.072 "num_base_bdevs": 3, 00:14:40.072 "num_base_bdevs_discovered": 1, 00:14:40.072 "num_base_bdevs_operational": 3, 00:14:40.072 "base_bdevs_list": [ 00:14:40.072 { 00:14:40.072 "name": null, 00:14:40.072 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:40.072 "is_configured": false, 00:14:40.072 "data_offset": 0, 00:14:40.072 "data_size": 65536 00:14:40.072 }, 00:14:40.072 { 00:14:40.072 "name": null, 00:14:40.072 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:40.072 "is_configured": false, 00:14:40.072 "data_offset": 0, 00:14:40.072 "data_size": 65536 00:14:40.072 }, 00:14:40.072 { 00:14:40.072 "name": "BaseBdev3", 00:14:40.072 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:40.072 "is_configured": true, 00:14:40.072 "data_offset": 0, 00:14:40.072 "data_size": 65536 00:14:40.072 } 00:14:40.072 ] 00:14:40.072 }' 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.072 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.642 [2024-10-25 17:55:58.871670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.642 17:55:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.642 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.642 "name": "Existed_Raid", 00:14:40.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.642 "strip_size_kb": 64, 00:14:40.642 "state": "configuring", 00:14:40.642 "raid_level": "raid5f", 00:14:40.642 "superblock": false, 00:14:40.642 "num_base_bdevs": 3, 00:14:40.642 "num_base_bdevs_discovered": 2, 00:14:40.643 "num_base_bdevs_operational": 3, 00:14:40.643 "base_bdevs_list": [ 00:14:40.643 { 00:14:40.643 "name": null, 00:14:40.643 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:40.643 "is_configured": false, 00:14:40.643 "data_offset": 0, 00:14:40.643 "data_size": 65536 00:14:40.643 }, 00:14:40.643 { 00:14:40.643 "name": "BaseBdev2", 00:14:40.643 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:40.643 "is_configured": true, 00:14:40.643 "data_offset": 0, 00:14:40.643 "data_size": 65536 00:14:40.643 }, 00:14:40.643 { 00:14:40.643 "name": "BaseBdev3", 00:14:40.643 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:40.643 "is_configured": true, 00:14:40.643 "data_offset": 0, 00:14:40.643 "data_size": 65536 00:14:40.643 } 00:14:40.643 ] 00:14:40.643 }' 00:14:40.643 17:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.643 17:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.902 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.902 17:55:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:40.902 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.902 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.902 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e48b7f0-5746-4ae0-aa53-2cc431c03be8 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.162 [2024-10-25 17:55:59.412958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:41.162 [2024-10-25 17:55:59.413014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:41.162 [2024-10-25 17:55:59.413024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:41.162 [2024-10-25 17:55:59.413303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:41.162 [2024-10-25 17:55:59.418615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:41.162 [2024-10-25 17:55:59.418640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:41.162 [2024-10-25 17:55:59.418915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.162 NewBaseBdev 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.162 17:55:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.162 [ 00:14:41.162 { 00:14:41.162 "name": "NewBaseBdev", 00:14:41.162 "aliases": [ 00:14:41.162 "1e48b7f0-5746-4ae0-aa53-2cc431c03be8" 00:14:41.162 ], 00:14:41.162 "product_name": "Malloc disk", 00:14:41.162 "block_size": 512, 00:14:41.162 "num_blocks": 65536, 00:14:41.162 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:41.162 "assigned_rate_limits": { 00:14:41.162 "rw_ios_per_sec": 0, 00:14:41.162 "rw_mbytes_per_sec": 0, 00:14:41.162 "r_mbytes_per_sec": 0, 00:14:41.162 "w_mbytes_per_sec": 0 00:14:41.162 }, 00:14:41.162 "claimed": true, 00:14:41.162 "claim_type": "exclusive_write", 00:14:41.162 "zoned": false, 00:14:41.162 "supported_io_types": { 00:14:41.162 "read": true, 00:14:41.162 "write": true, 00:14:41.162 "unmap": true, 00:14:41.162 "flush": true, 00:14:41.162 "reset": true, 00:14:41.162 "nvme_admin": false, 00:14:41.162 "nvme_io": false, 00:14:41.162 "nvme_io_md": false, 00:14:41.162 "write_zeroes": true, 00:14:41.162 "zcopy": true, 00:14:41.162 "get_zone_info": false, 00:14:41.162 "zone_management": false, 00:14:41.162 "zone_append": false, 00:14:41.162 "compare": false, 00:14:41.162 "compare_and_write": false, 00:14:41.162 "abort": true, 00:14:41.162 "seek_hole": false, 00:14:41.162 "seek_data": false, 00:14:41.162 "copy": true, 00:14:41.162 "nvme_iov_md": false 00:14:41.162 }, 00:14:41.162 "memory_domains": [ 00:14:41.162 { 00:14:41.162 "dma_device_id": "system", 00:14:41.162 "dma_device_type": 1 00:14:41.162 }, 00:14:41.162 { 00:14:41.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.162 "dma_device_type": 2 00:14:41.162 } 00:14:41.162 ], 00:14:41.162 "driver_specific": {} 00:14:41.162 } 00:14:41.162 ] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.162 17:55:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.162 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.162 "name": "Existed_Raid", 00:14:41.162 "uuid": "80792b9f-905d-4dd6-9bd1-57eb2c8849e1", 00:14:41.162 "strip_size_kb": 64, 00:14:41.162 "state": "online", 
00:14:41.162 "raid_level": "raid5f", 00:14:41.162 "superblock": false, 00:14:41.162 "num_base_bdevs": 3, 00:14:41.162 "num_base_bdevs_discovered": 3, 00:14:41.162 "num_base_bdevs_operational": 3, 00:14:41.162 "base_bdevs_list": [ 00:14:41.162 { 00:14:41.162 "name": "NewBaseBdev", 00:14:41.162 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:41.162 "is_configured": true, 00:14:41.162 "data_offset": 0, 00:14:41.162 "data_size": 65536 00:14:41.162 }, 00:14:41.162 { 00:14:41.162 "name": "BaseBdev2", 00:14:41.162 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:41.162 "is_configured": true, 00:14:41.162 "data_offset": 0, 00:14:41.162 "data_size": 65536 00:14:41.162 }, 00:14:41.162 { 00:14:41.162 "name": "BaseBdev3", 00:14:41.162 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:41.162 "is_configured": true, 00:14:41.162 "data_offset": 0, 00:14:41.162 "data_size": 65536 00:14:41.162 } 00:14:41.163 ] 00:14:41.163 }' 00:14:41.163 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.163 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:41.732 17:55:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 [2024-10-25 17:55:59.925143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.732 "name": "Existed_Raid", 00:14:41.732 "aliases": [ 00:14:41.732 "80792b9f-905d-4dd6-9bd1-57eb2c8849e1" 00:14:41.732 ], 00:14:41.732 "product_name": "Raid Volume", 00:14:41.732 "block_size": 512, 00:14:41.732 "num_blocks": 131072, 00:14:41.732 "uuid": "80792b9f-905d-4dd6-9bd1-57eb2c8849e1", 00:14:41.732 "assigned_rate_limits": { 00:14:41.732 "rw_ios_per_sec": 0, 00:14:41.732 "rw_mbytes_per_sec": 0, 00:14:41.732 "r_mbytes_per_sec": 0, 00:14:41.732 "w_mbytes_per_sec": 0 00:14:41.732 }, 00:14:41.732 "claimed": false, 00:14:41.732 "zoned": false, 00:14:41.732 "supported_io_types": { 00:14:41.732 "read": true, 00:14:41.732 "write": true, 00:14:41.732 "unmap": false, 00:14:41.732 "flush": false, 00:14:41.732 "reset": true, 00:14:41.732 "nvme_admin": false, 00:14:41.732 "nvme_io": false, 00:14:41.732 "nvme_io_md": false, 00:14:41.732 "write_zeroes": true, 00:14:41.732 "zcopy": false, 00:14:41.732 "get_zone_info": false, 00:14:41.732 "zone_management": false, 00:14:41.732 "zone_append": false, 00:14:41.732 "compare": false, 00:14:41.732 "compare_and_write": false, 00:14:41.732 "abort": false, 00:14:41.732 "seek_hole": false, 00:14:41.732 "seek_data": false, 00:14:41.732 "copy": false, 00:14:41.732 "nvme_iov_md": false 00:14:41.732 }, 00:14:41.732 "driver_specific": { 00:14:41.732 "raid": { 00:14:41.732 "uuid": 
"80792b9f-905d-4dd6-9bd1-57eb2c8849e1", 00:14:41.732 "strip_size_kb": 64, 00:14:41.732 "state": "online", 00:14:41.732 "raid_level": "raid5f", 00:14:41.732 "superblock": false, 00:14:41.732 "num_base_bdevs": 3, 00:14:41.732 "num_base_bdevs_discovered": 3, 00:14:41.732 "num_base_bdevs_operational": 3, 00:14:41.732 "base_bdevs_list": [ 00:14:41.732 { 00:14:41.732 "name": "NewBaseBdev", 00:14:41.732 "uuid": "1e48b7f0-5746-4ae0-aa53-2cc431c03be8", 00:14:41.732 "is_configured": true, 00:14:41.732 "data_offset": 0, 00:14:41.732 "data_size": 65536 00:14:41.732 }, 00:14:41.732 { 00:14:41.732 "name": "BaseBdev2", 00:14:41.732 "uuid": "173eaf7b-6662-4c71-bfec-786ab00ea998", 00:14:41.732 "is_configured": true, 00:14:41.732 "data_offset": 0, 00:14:41.732 "data_size": 65536 00:14:41.732 }, 00:14:41.732 { 00:14:41.732 "name": "BaseBdev3", 00:14:41.732 "uuid": "e88b6b85-764a-4a08-883b-06488966b6f6", 00:14:41.732 "is_configured": true, 00:14:41.732 "data_offset": 0, 00:14:41.732 "data_size": 65536 00:14:41.732 } 00:14:41.732 ] 00:14:41.732 } 00:14:41.732 } 00:14:41.732 }' 00:14:41.732 17:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:41.732 BaseBdev2 00:14:41.732 BaseBdev3' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.732 17:56:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.993 [2024-10-25 17:56:00.216526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.993 [2024-10-25 17:56:00.216568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.993 [2024-10-25 17:56:00.216662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.993 [2024-10-25 17:56:00.216988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.993 [2024-10-25 17:56:00.217011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79796 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79796 ']' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 79796 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79796 00:14:41.993 killing process with pid 79796 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79796' 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79796 00:14:41.993 17:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79796 00:14:41.993 [2024-10-25 17:56:00.263236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.253 [2024-10-25 17:56:00.626824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.646 00:14:43.646 real 0m10.946s 00:14:43.646 user 0m17.072s 00:14:43.646 sys 0m2.004s 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.646 ************************************ 00:14:43.646 END TEST raid5f_state_function_test 00:14:43.646 ************************************ 00:14:43.646 17:56:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:43.646 17:56:01 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:43.646 17:56:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.646 17:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.646 ************************************ 00:14:43.646 START TEST raid5f_state_function_test_sb 00:14:43.646 ************************************ 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.646 17:56:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80417 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:43.646 Process raid pid: 80417 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80417' 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80417 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80417 ']' 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.646 17:56:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.646 [2024-10-25 17:56:02.064496] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:14:43.646 [2024-10-25 17:56:02.064630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.905 [2024-10-25 17:56:02.240929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.165 [2024-10-25 17:56:02.384773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.425 [2024-10-25 17:56:02.630815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.425 [2024-10-25 17:56:02.630874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.686 [2024-10-25 17:56:02.958340] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.686 [2024-10-25 17:56:02.958413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.686 [2024-10-25 17:56:02.958426] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.686 [2024-10-25 17:56:02.958437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.686 [2024-10-25 17:56:02.958444] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:44.686 [2024-10-25 17:56:02.958454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.686 17:56:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.686 17:56:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.686 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.686 "name": "Existed_Raid", 00:14:44.686 "uuid": "5ff91777-913d-403c-b447-ee5a0839047d", 00:14:44.686 "strip_size_kb": 64, 00:14:44.686 "state": "configuring", 00:14:44.686 "raid_level": "raid5f", 00:14:44.686 "superblock": true, 00:14:44.686 "num_base_bdevs": 3, 00:14:44.686 "num_base_bdevs_discovered": 0, 00:14:44.686 "num_base_bdevs_operational": 3, 00:14:44.686 "base_bdevs_list": [ 00:14:44.686 { 00:14:44.686 "name": "BaseBdev1", 00:14:44.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.686 "is_configured": false, 00:14:44.686 "data_offset": 0, 00:14:44.686 "data_size": 0 00:14:44.686 }, 00:14:44.686 { 00:14:44.686 "name": "BaseBdev2", 00:14:44.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.686 "is_configured": false, 00:14:44.686 "data_offset": 0, 00:14:44.686 "data_size": 0 00:14:44.686 }, 00:14:44.686 { 00:14:44.686 "name": "BaseBdev3", 00:14:44.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.686 "is_configured": false, 00:14:44.686 "data_offset": 0, 00:14:44.686 "data_size": 0 00:14:44.686 } 00:14:44.686 ] 00:14:44.686 }' 00:14:44.686 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.686 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.256 [2024-10-25 17:56:03.437504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.256 
[2024-10-25 17:56:03.437577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.256 [2024-10-25 17:56:03.449432] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.256 [2024-10-25 17:56:03.449489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.256 [2024-10-25 17:56:03.449501] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.256 [2024-10-25 17:56:03.449512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.256 [2024-10-25 17:56:03.449519] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.256 [2024-10-25 17:56:03.449530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.256 [2024-10-25 17:56:03.504530] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.256 BaseBdev1 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.256 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.257 [ 00:14:45.257 { 00:14:45.257 "name": "BaseBdev1", 00:14:45.257 "aliases": [ 00:14:45.257 "17549d5b-3cf3-4fa8-b4b7-271df75d9c98" 00:14:45.257 ], 00:14:45.257 "product_name": "Malloc disk", 00:14:45.257 "block_size": 512, 00:14:45.257 
"num_blocks": 65536, 00:14:45.257 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98", 00:14:45.257 "assigned_rate_limits": { 00:14:45.257 "rw_ios_per_sec": 0, 00:14:45.257 "rw_mbytes_per_sec": 0, 00:14:45.257 "r_mbytes_per_sec": 0, 00:14:45.257 "w_mbytes_per_sec": 0 00:14:45.257 }, 00:14:45.257 "claimed": true, 00:14:45.257 "claim_type": "exclusive_write", 00:14:45.257 "zoned": false, 00:14:45.257 "supported_io_types": { 00:14:45.257 "read": true, 00:14:45.257 "write": true, 00:14:45.257 "unmap": true, 00:14:45.257 "flush": true, 00:14:45.257 "reset": true, 00:14:45.257 "nvme_admin": false, 00:14:45.257 "nvme_io": false, 00:14:45.257 "nvme_io_md": false, 00:14:45.257 "write_zeroes": true, 00:14:45.257 "zcopy": true, 00:14:45.257 "get_zone_info": false, 00:14:45.257 "zone_management": false, 00:14:45.257 "zone_append": false, 00:14:45.257 "compare": false, 00:14:45.257 "compare_and_write": false, 00:14:45.257 "abort": true, 00:14:45.257 "seek_hole": false, 00:14:45.257 "seek_data": false, 00:14:45.257 "copy": true, 00:14:45.257 "nvme_iov_md": false 00:14:45.257 }, 00:14:45.257 "memory_domains": [ 00:14:45.257 { 00:14:45.257 "dma_device_id": "system", 00:14:45.257 "dma_device_type": 1 00:14:45.257 }, 00:14:45.257 { 00:14:45.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.257 "dma_device_type": 2 00:14:45.257 } 00:14:45.257 ], 00:14:45.257 "driver_specific": {} 00:14:45.257 } 00:14:45.257 ] 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.257 "name": "Existed_Raid", 00:14:45.257 "uuid": "1d2586f1-155f-4eb2-b99b-6285d4a3f2cb", 00:14:45.257 "strip_size_kb": 64, 00:14:45.257 "state": "configuring", 00:14:45.257 "raid_level": "raid5f", 00:14:45.257 "superblock": true, 00:14:45.257 "num_base_bdevs": 3, 00:14:45.257 "num_base_bdevs_discovered": 1, 00:14:45.257 "num_base_bdevs_operational": 3, 00:14:45.257 "base_bdevs_list": [ 00:14:45.257 { 00:14:45.257 
"name": "BaseBdev1", 00:14:45.257 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98", 00:14:45.257 "is_configured": true, 00:14:45.257 "data_offset": 2048, 00:14:45.257 "data_size": 63488 00:14:45.257 }, 00:14:45.257 { 00:14:45.257 "name": "BaseBdev2", 00:14:45.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.257 "is_configured": false, 00:14:45.257 "data_offset": 0, 00:14:45.257 "data_size": 0 00:14:45.257 }, 00:14:45.257 { 00:14:45.257 "name": "BaseBdev3", 00:14:45.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.257 "is_configured": false, 00:14:45.257 "data_offset": 0, 00:14:45.257 "data_size": 0 00:14:45.257 } 00:14:45.257 ] 00:14:45.257 }' 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.257 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.825 [2024-10-25 17:56:03.991881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.825 [2024-10-25 17:56:03.991962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.825 17:56:03 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:45.825 [2024-10-25 17:56:04.003904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.825 [2024-10-25 17:56:04.006188] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.825 [2024-10-25 17:56:04.006233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.825 [2024-10-25 17:56:04.006244] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.825 [2024-10-25 17:56:04.006253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.825 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.825 "name": "Existed_Raid", 00:14:45.825 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28", 00:14:45.825 "strip_size_kb": 64, 00:14:45.825 "state": "configuring", 00:14:45.825 "raid_level": "raid5f", 00:14:45.825 "superblock": true, 00:14:45.825 "num_base_bdevs": 3, 00:14:45.825 "num_base_bdevs_discovered": 1, 00:14:45.825 "num_base_bdevs_operational": 3, 00:14:45.825 "base_bdevs_list": [ 00:14:45.825 { 00:14:45.825 "name": "BaseBdev1", 00:14:45.825 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98", 00:14:45.825 "is_configured": true, 00:14:45.825 "data_offset": 2048, 00:14:45.825 "data_size": 63488 00:14:45.825 }, 00:14:45.826 { 00:14:45.826 "name": "BaseBdev2", 00:14:45.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.826 "is_configured": false, 00:14:45.826 "data_offset": 0, 00:14:45.826 "data_size": 0 00:14:45.826 }, 00:14:45.826 { 00:14:45.826 "name": "BaseBdev3", 00:14:45.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.826 "is_configured": false, 00:14:45.826 "data_offset": 0, 00:14:45.826 "data_size": 
0 00:14:45.826 } 00:14:45.826 ] 00:14:45.826 }' 00:14:45.826 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.826 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 [2024-10-25 17:56:04.422780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.086 BaseBdev2 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 [ 00:14:46.086 { 00:14:46.086 "name": "BaseBdev2", 00:14:46.086 "aliases": [ 00:14:46.086 "6af5f4de-94a4-4bce-8cef-4b01ee70d830" 00:14:46.086 ], 00:14:46.086 "product_name": "Malloc disk", 00:14:46.086 "block_size": 512, 00:14:46.086 "num_blocks": 65536, 00:14:46.086 "uuid": "6af5f4de-94a4-4bce-8cef-4b01ee70d830", 00:14:46.086 "assigned_rate_limits": { 00:14:46.086 "rw_ios_per_sec": 0, 00:14:46.086 "rw_mbytes_per_sec": 0, 00:14:46.086 "r_mbytes_per_sec": 0, 00:14:46.086 "w_mbytes_per_sec": 0 00:14:46.086 }, 00:14:46.086 "claimed": true, 00:14:46.086 "claim_type": "exclusive_write", 00:14:46.086 "zoned": false, 00:14:46.086 "supported_io_types": { 00:14:46.086 "read": true, 00:14:46.086 "write": true, 00:14:46.086 "unmap": true, 00:14:46.086 "flush": true, 00:14:46.086 "reset": true, 00:14:46.086 "nvme_admin": false, 00:14:46.086 "nvme_io": false, 00:14:46.086 "nvme_io_md": false, 00:14:46.086 "write_zeroes": true, 00:14:46.086 "zcopy": true, 00:14:46.086 "get_zone_info": false, 00:14:46.086 "zone_management": false, 00:14:46.086 "zone_append": false, 00:14:46.086 "compare": false, 00:14:46.086 "compare_and_write": false, 00:14:46.086 "abort": true, 00:14:46.086 "seek_hole": false, 00:14:46.086 "seek_data": false, 00:14:46.086 "copy": true, 00:14:46.086 "nvme_iov_md": false 00:14:46.086 }, 00:14:46.086 "memory_domains": [ 00:14:46.086 { 00:14:46.086 "dma_device_id": "system", 00:14:46.086 "dma_device_type": 1 00:14:46.086 }, 00:14:46.086 { 00:14:46.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.086 "dma_device_type": 2 00:14:46.086 } 
00:14:46.086 ], 00:14:46.086 "driver_specific": {} 00:14:46.086 } 00:14:46.086 ] 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.086 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.087 "name": "Existed_Raid", 00:14:46.087 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28", 00:14:46.087 "strip_size_kb": 64, 00:14:46.087 "state": "configuring", 00:14:46.087 "raid_level": "raid5f", 00:14:46.087 "superblock": true, 00:14:46.087 "num_base_bdevs": 3, 00:14:46.087 "num_base_bdevs_discovered": 2, 00:14:46.087 "num_base_bdevs_operational": 3, 00:14:46.087 "base_bdevs_list": [ 00:14:46.087 { 00:14:46.087 "name": "BaseBdev1", 00:14:46.087 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98", 00:14:46.087 "is_configured": true, 00:14:46.087 "data_offset": 2048, 00:14:46.087 "data_size": 63488 00:14:46.087 }, 00:14:46.087 { 00:14:46.087 "name": "BaseBdev2", 00:14:46.087 "uuid": "6af5f4de-94a4-4bce-8cef-4b01ee70d830", 00:14:46.087 "is_configured": true, 00:14:46.087 "data_offset": 2048, 00:14:46.087 "data_size": 63488 00:14:46.087 }, 00:14:46.087 { 00:14:46.087 "name": "BaseBdev3", 00:14:46.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.087 "is_configured": false, 00:14:46.087 "data_offset": 0, 00:14:46.087 "data_size": 0 00:14:46.087 } 00:14:46.087 ] 00:14:46.087 }' 00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.087 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.658 [2024-10-25 17:56:04.979433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.658 [2024-10-25 17:56:04.979744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.658 [2024-10-25 17:56:04.979770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.658 [2024-10-25 17:56:04.980091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:46.658 BaseBdev3 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.658 [2024-10-25 17:56:04.986281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.658 [2024-10-25 17:56:04.986307] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:46.658 [2024-10-25 17:56:04.986498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.658 17:56:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.658 [ 00:14:46.658 { 00:14:46.658 "name": "BaseBdev3", 00:14:46.658 "aliases": [ 00:14:46.658 "34ad462f-8fae-4fd2-86a8-b10ed0f90952" 00:14:46.658 ], 00:14:46.658 "product_name": "Malloc disk", 00:14:46.658 "block_size": 512, 00:14:46.658 "num_blocks": 65536, 00:14:46.658 "uuid": "34ad462f-8fae-4fd2-86a8-b10ed0f90952", 00:14:46.658 "assigned_rate_limits": { 00:14:46.658 "rw_ios_per_sec": 0, 00:14:46.658 "rw_mbytes_per_sec": 0, 00:14:46.658 "r_mbytes_per_sec": 0, 00:14:46.658 "w_mbytes_per_sec": 0 00:14:46.658 }, 00:14:46.658 "claimed": true, 00:14:46.658 "claim_type": "exclusive_write", 00:14:46.658 "zoned": false, 00:14:46.658 "supported_io_types": { 00:14:46.658 "read": true, 00:14:46.658 "write": true, 00:14:46.658 "unmap": true, 00:14:46.658 "flush": true, 00:14:46.658 "reset": true, 00:14:46.658 "nvme_admin": false, 00:14:46.658 "nvme_io": false, 00:14:46.658 "nvme_io_md": false, 00:14:46.658 "write_zeroes": true, 00:14:46.658 "zcopy": true, 00:14:46.658 "get_zone_info": false, 00:14:46.658 "zone_management": false, 00:14:46.658 "zone_append": false, 00:14:46.658 "compare": false, 00:14:46.658 "compare_and_write": false, 00:14:46.658 "abort": true, 00:14:46.658 "seek_hole": false, 00:14:46.658 "seek_data": false, 00:14:46.658 "copy": true, 00:14:46.658 
"nvme_iov_md": false
00:14:46.658 },
00:14:46.658 "memory_domains": [
00:14:46.658 {
00:14:46.658 "dma_device_id": "system",
00:14:46.658 "dma_device_type": 1
00:14:46.658 },
00:14:46.658 {
00:14:46.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:46.658 "dma_device_type": 2
00:14:46.658 }
00:14:46.658 ],
00:14:46.658 "driver_specific": {}
00:14:46.658 }
00:14:46.658 ]
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:46.658 "name": "Existed_Raid",
00:14:46.658 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28",
00:14:46.658 "strip_size_kb": 64,
00:14:46.658 "state": "online",
00:14:46.658 "raid_level": "raid5f",
00:14:46.658 "superblock": true,
00:14:46.658 "num_base_bdevs": 3,
00:14:46.658 "num_base_bdevs_discovered": 3,
00:14:46.658 "num_base_bdevs_operational": 3,
00:14:46.658 "base_bdevs_list": [
00:14:46.658 {
00:14:46.658 "name": "BaseBdev1",
00:14:46.658 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98",
00:14:46.658 "is_configured": true,
00:14:46.658 "data_offset": 2048,
00:14:46.658 "data_size": 63488
00:14:46.658 },
00:14:46.658 {
00:14:46.658 "name": "BaseBdev2",
00:14:46.658 "uuid": "6af5f4de-94a4-4bce-8cef-4b01ee70d830",
00:14:46.658 "is_configured": true,
00:14:46.658 "data_offset": 2048,
00:14:46.658 "data_size": 63488
00:14:46.658 },
00:14:46.658 {
00:14:46.658 "name": "BaseBdev3",
00:14:46.658 "uuid": "34ad462f-8fae-4fd2-86a8-b10ed0f90952",
00:14:46.658 "is_configured": true,
00:14:46.658 "data_offset": 2048,
00:14:46.658 "data_size": 63488
00:14:46.658 }
00:14:46.658 ]
00:14:46.658 }'
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:46.658 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:47.238 [2024-10-25 17:56:05.521871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:47.238 "name": "Existed_Raid",
00:14:47.238 "aliases": [
00:14:47.238 "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28"
00:14:47.238 ],
00:14:47.238 "product_name": "Raid Volume",
00:14:47.238 "block_size": 512,
00:14:47.238 "num_blocks": 126976,
00:14:47.238 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28",
00:14:47.238 "assigned_rate_limits": {
00:14:47.238 "rw_ios_per_sec": 0,
00:14:47.238 "rw_mbytes_per_sec": 0,
00:14:47.238 "r_mbytes_per_sec": 0,
00:14:47.238 "w_mbytes_per_sec": 0
00:14:47.238 },
00:14:47.238 "claimed": false,
00:14:47.238 "zoned": false,
00:14:47.238 "supported_io_types": {
00:14:47.238 "read": true,
00:14:47.238 "write": true,
00:14:47.238 "unmap": false,
00:14:47.238 "flush": false,
00:14:47.238 "reset": true,
00:14:47.238 "nvme_admin": false,
00:14:47.238 "nvme_io": false,
00:14:47.238 "nvme_io_md": false,
00:14:47.238 "write_zeroes": true,
00:14:47.238 "zcopy": false,
00:14:47.238 "get_zone_info": false,
00:14:47.238 "zone_management": false,
00:14:47.238 "zone_append": false,
00:14:47.238 "compare": false,
00:14:47.238 "compare_and_write": false,
00:14:47.238 "abort": false,
00:14:47.238 "seek_hole": false,
00:14:47.238 "seek_data": false,
00:14:47.238 "copy": false,
00:14:47.238 "nvme_iov_md": false
00:14:47.238 },
00:14:47.238 "driver_specific": {
00:14:47.238 "raid": {
00:14:47.238 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28",
00:14:47.238 "strip_size_kb": 64,
00:14:47.238 "state": "online",
00:14:47.238 "raid_level": "raid5f",
00:14:47.238 "superblock": true,
00:14:47.238 "num_base_bdevs": 3,
00:14:47.238 "num_base_bdevs_discovered": 3,
00:14:47.238 "num_base_bdevs_operational": 3,
00:14:47.238 "base_bdevs_list": [
00:14:47.238 {
00:14:47.238 "name": "BaseBdev1",
00:14:47.238 "uuid": "17549d5b-3cf3-4fa8-b4b7-271df75d9c98",
00:14:47.238 "is_configured": true,
00:14:47.238 "data_offset": 2048,
00:14:47.238 "data_size": 63488
00:14:47.238 },
00:14:47.238 {
00:14:47.238 "name": "BaseBdev2",
00:14:47.238 "uuid": "6af5f4de-94a4-4bce-8cef-4b01ee70d830",
00:14:47.238 "is_configured": true,
00:14:47.238 "data_offset": 2048,
00:14:47.238 "data_size": 63488
00:14:47.238 },
00:14:47.238 {
00:14:47.238 "name": "BaseBdev3",
00:14:47.238 "uuid": "34ad462f-8fae-4fd2-86a8-b10ed0f90952",
00:14:47.238 "is_configured": true,
00:14:47.238 "data_offset": 2048,
00:14:47.238 "data_size": 63488
00:14:47.238 }
00:14:47.238 ]
00:14:47.238 }
00:14:47.238 }
00:14:47.238 }'
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:14:47.238 BaseBdev2
00:14:47.238 BaseBdev3'
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.238 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.511 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:47.511 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:47.511 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:47.511 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:47.511 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.512 [2024-10-25 17:56:05.801148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.512 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.772 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:47.772 "name": "Existed_Raid",
00:14:47.772 "uuid": "6a6c9ab9-8647-4bff-a8d9-ac502fdaff28",
00:14:47.772 "strip_size_kb": 64,
00:14:47.772 "state": "online",
00:14:47.772 "raid_level": "raid5f",
00:14:47.772 "superblock": true,
00:14:47.772 "num_base_bdevs": 3,
00:14:47.772 "num_base_bdevs_discovered": 2,
00:14:47.772 "num_base_bdevs_operational": 2,
00:14:47.772 "base_bdevs_list": [
00:14:47.772 {
00:14:47.772 "name": null,
00:14:47.772 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.772 "is_configured": false,
00:14:47.772 "data_offset": 0,
00:14:47.772 "data_size": 63488
00:14:47.772 },
00:14:47.772 {
00:14:47.772 "name": "BaseBdev2",
00:14:47.772 "uuid": "6af5f4de-94a4-4bce-8cef-4b01ee70d830",
00:14:47.772 "is_configured": true,
00:14:47.772 "data_offset": 2048,
00:14:47.772 "data_size": 63488
00:14:47.772 },
00:14:47.772 {
00:14:47.772 "name": "BaseBdev3",
00:14:47.772 "uuid": "34ad462f-8fae-4fd2-86a8-b10ed0f90952",
00:14:47.772 "is_configured": true,
00:14:47.772 "data_offset": 2048,
00:14:47.772 "data_size": 63488
00:14:47.772 }
00:14:47.772 ]
00:14:47.772 }'
00:14:47.772 17:56:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:47.772 17:56:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.032 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.032 [2024-10-25 17:56:06.390672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:48.032 [2024-10-25 17:56:06.390874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:48.294 [2024-10-25 17:56:06.499443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.294 [2024-10-25 17:56:06.559418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:48.294 [2024-10-25 17:56:06.559491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.294 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.554 BaseBdev2
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.554 [
00:14:48.554 {
00:14:48.554 "name": "BaseBdev2",
00:14:48.554 "aliases": [
00:14:48.554 "c8934d8a-d752-4c49-8069-f31baaa35cd1"
00:14:48.554 ],
00:14:48.554 "product_name": "Malloc disk",
00:14:48.554 "block_size": 512,
00:14:48.554 "num_blocks": 65536,
00:14:48.554 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1",
00:14:48.554 "assigned_rate_limits": {
00:14:48.554 "rw_ios_per_sec": 0,
00:14:48.554 "rw_mbytes_per_sec": 0,
00:14:48.554 "r_mbytes_per_sec": 0,
00:14:48.554 "w_mbytes_per_sec": 0
00:14:48.554 },
00:14:48.554 "claimed": false,
00:14:48.554 "zoned": false,
00:14:48.554 "supported_io_types": {
00:14:48.554 "read": true,
00:14:48.554 "write": true,
00:14:48.554 "unmap": true,
00:14:48.554 "flush": true,
00:14:48.554 "reset": true,
00:14:48.554 "nvme_admin": false,
00:14:48.554 "nvme_io": false,
00:14:48.554 "nvme_io_md": false,
00:14:48.554 "write_zeroes": true,
00:14:48.554 "zcopy": true,
00:14:48.554 "get_zone_info": false,
00:14:48.554 "zone_management": false,
00:14:48.554 "zone_append": false,
00:14:48.554 "compare": false,
00:14:48.554 "compare_and_write": false,
00:14:48.554 "abort": true,
00:14:48.554 "seek_hole": false,
00:14:48.554 "seek_data": false,
00:14:48.554 "copy": true,
00:14:48.554 "nvme_iov_md": false
00:14:48.554 },
00:14:48.554 "memory_domains": [
00:14:48.554 {
00:14:48.554 "dma_device_id": "system",
00:14:48.554 "dma_device_type": 1
00:14:48.554 },
00:14:48.554 {
00:14:48.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:48.554 "dma_device_type": 2
00:14:48.554 }
00:14:48.554 ],
00:14:48.554 "driver_specific": {}
00:14:48.554 }
00:14:48.554 ]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.554 BaseBdev3
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.554 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.555 [
00:14:48.555 {
00:14:48.555 "name": "BaseBdev3",
00:14:48.555 "aliases": [
00:14:48.555 "2302f36d-4253-4f9d-bd64-f841476806b7"
00:14:48.555 ],
00:14:48.555 "product_name": "Malloc disk",
00:14:48.555 "block_size": 512,
00:14:48.555 "num_blocks": 65536,
00:14:48.555 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7",
00:14:48.555 "assigned_rate_limits": {
00:14:48.555 "rw_ios_per_sec": 0,
00:14:48.555 "rw_mbytes_per_sec": 0,
00:14:48.555 "r_mbytes_per_sec": 0,
00:14:48.555 "w_mbytes_per_sec": 0
00:14:48.555 },
00:14:48.555 "claimed": false,
00:14:48.555 "zoned": false,
00:14:48.555 "supported_io_types": {
00:14:48.555 "read": true,
00:14:48.555 "write": true,
00:14:48.555 "unmap": true,
00:14:48.555 "flush": true,
00:14:48.555 "reset": true,
00:14:48.555 "nvme_admin": false,
00:14:48.555 "nvme_io": false,
00:14:48.555 "nvme_io_md": false,
00:14:48.555 "write_zeroes": true,
00:14:48.555 "zcopy": true,
00:14:48.555 "get_zone_info": false,
00:14:48.555 "zone_management": false,
00:14:48.555 "zone_append": false,
00:14:48.555 "compare": false,
00:14:48.555 "compare_and_write": false,
00:14:48.555 "abort": true,
00:14:48.555 "seek_hole": false,
00:14:48.555 "seek_data": false,
00:14:48.555 "copy": true,
00:14:48.555 "nvme_iov_md": false
00:14:48.555 },
00:14:48.555 "memory_domains": [
00:14:48.555 {
00:14:48.555 "dma_device_id": "system",
00:14:48.555 "dma_device_type": 1
00:14:48.555 },
00:14:48.555 {
00:14:48.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:48.555 "dma_device_type": 2
00:14:48.555 }
00:14:48.555 ],
00:14:48.555 "driver_specific": {}
00:14:48.555 }
00:14:48.555 ]
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.555 [2024-10-25 17:56:06.894604] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:48.555 [2024-10-25 17:56:06.894666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:48.555 [2024-10-25 17:56:06.894691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:48.555 [2024-10-25 17:56:06.897308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.555 "name": "Existed_Raid",
00:14:48.555 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee",
00:14:48.555 "strip_size_kb": 64,
00:14:48.555 "state": "configuring",
00:14:48.555 "raid_level": "raid5f",
00:14:48.555 "superblock": true,
00:14:48.555 "num_base_bdevs": 3,
00:14:48.555 "num_base_bdevs_discovered": 2,
00:14:48.555 "num_base_bdevs_operational": 3,
00:14:48.555 "base_bdevs_list": [
00:14:48.555 {
00:14:48.555 "name": "BaseBdev1",
00:14:48.555 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.555 "is_configured": false,
00:14:48.555 "data_offset": 0,
00:14:48.555 "data_size": 0
00:14:48.555 },
00:14:48.555 {
00:14:48.555 "name": "BaseBdev2",
00:14:48.555 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1",
00:14:48.555 "is_configured": true,
00:14:48.555 "data_offset": 2048,
00:14:48.555 "data_size": 63488
00:14:48.555 },
00:14:48.555 {
00:14:48.555 "name": "BaseBdev3",
00:14:48.555 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7",
00:14:48.555 "is_configured": true,
00:14:48.555 "data_offset": 2048,
00:14:48.555 "data_size": 63488
00:14:48.555 }
00:14:48.555 ]
00:14:48.555 }'
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.555 17:56:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.126 [2024-10-25 17:56:07.365815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:49.126 "name": "Existed_Raid",
00:14:49.126 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee",
00:14:49.126 "strip_size_kb": 64,
00:14:49.126 "state": "configuring",
00:14:49.126 "raid_level": "raid5f",
00:14:49.126 "superblock": true,
00:14:49.126 "num_base_bdevs": 3,
00:14:49.126 "num_base_bdevs_discovered": 1,
00:14:49.126 "num_base_bdevs_operational": 3,
00:14:49.126 "base_bdevs_list": [
00:14:49.126 {
00:14:49.126 "name": "BaseBdev1",
00:14:49.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.126 "is_configured": false,
00:14:49.126 "data_offset": 0,
00:14:49.126 "data_size": 0
00:14:49.126 },
00:14:49.126 {
00:14:49.126 "name": null,
00:14:49.126 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1",
00:14:49.126 "is_configured": false,
00:14:49.126 "data_offset": 0,
00:14:49.126 "data_size": 63488
00:14:49.126 },
00:14:49.126 {
00:14:49.126 "name": "BaseBdev3",
00:14:49.126 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7",
00:14:49.126 "is_configured": true,
00:14:49.126 "data_offset": 2048,
00:14:49.126 "data_size": 63488
00:14:49.126 }
00:14:49.126 ]
00:14:49.126 }'
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:49.126 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.386 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.386 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:49.386 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.386 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.386 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.645 [2024-10-25 17:56:07.897072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:49.645 BaseBdev1
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.645 [
00:14:49.645 {
00:14:49.645 "name": "BaseBdev1",
00:14:49.645 "aliases": [
00:14:49.645 "fd189913-075c-43f2-b916-f45e14befc73"
00:14:49.645 ],
00:14:49.645 "product_name": "Malloc disk",
00:14:49.645 "block_size": 512,
00:14:49.645 "num_blocks": 65536,
00:14:49.645 "uuid": "fd189913-075c-43f2-b916-f45e14befc73",
00:14:49.645 "assigned_rate_limits": {
00:14:49.645 "rw_ios_per_sec": 0,
00:14:49.645 "rw_mbytes_per_sec": 0,
00:14:49.645 "r_mbytes_per_sec": 0,
00:14:49.645 "w_mbytes_per_sec": 0
00:14:49.645 },
00:14:49.645 "claimed": true,
00:14:49.645 "claim_type": "exclusive_write",
00:14:49.645 "zoned": false,
00:14:49.645 "supported_io_types": {
00:14:49.645 "read": true,
00:14:49.645 "write": true,
00:14:49.645 "unmap": true,
00:14:49.645 "flush": true,
00:14:49.645 "reset": true,
00:14:49.645 "nvme_admin": false,
00:14:49.645 "nvme_io": false,
00:14:49.645 "nvme_io_md": false,
00:14:49.645 "write_zeroes": true,
00:14:49.645 "zcopy": true,
00:14:49.645 "get_zone_info": false,
00:14:49.645 "zone_management": false,
00:14:49.645 "zone_append": false,
00:14:49.645 "compare": false,
00:14:49.645 "compare_and_write": false,
00:14:49.645 "abort": true,
00:14:49.645 "seek_hole": false,
00:14:49.645 "seek_data": false,
00:14:49.645 "copy": true,
00:14:49.645 "nvme_iov_md": false
00:14:49.645 },
00:14:49.645 "memory_domains": [
00:14:49.645 {
00:14:49.645 "dma_device_id": "system",
00:14:49.645 "dma_device_type": 1
00:14:49.645 },
00:14:49.645 {
00:14:49.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:49.645 "dma_device_type": 2
00:14:49.645 }
00:14:49.645 ],
00:14:49.645 "driver_specific": {}
00:14:49.645 }
00:14:49.645 ]
00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.645
17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.645 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.646 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:49.646 "name": "Existed_Raid", 00:14:49.646 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:49.646 "strip_size_kb": 64, 00:14:49.646 "state": "configuring", 00:14:49.646 "raid_level": "raid5f", 00:14:49.646 "superblock": true, 00:14:49.646 "num_base_bdevs": 3, 00:14:49.646 "num_base_bdevs_discovered": 2, 00:14:49.646 "num_base_bdevs_operational": 3, 00:14:49.646 "base_bdevs_list": [ 00:14:49.646 { 00:14:49.646 "name": "BaseBdev1", 00:14:49.646 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:49.646 "is_configured": true, 00:14:49.646 "data_offset": 2048, 00:14:49.646 "data_size": 63488 00:14:49.646 }, 00:14:49.646 { 00:14:49.646 "name": null, 00:14:49.646 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:49.646 "is_configured": false, 00:14:49.646 "data_offset": 0, 00:14:49.646 "data_size": 63488 00:14:49.646 }, 00:14:49.646 { 00:14:49.646 "name": "BaseBdev3", 00:14:49.646 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:49.646 "is_configured": true, 00:14:49.646 "data_offset": 2048, 00:14:49.646 "data_size": 63488 00:14:49.646 } 00:14:49.646 ] 00:14:49.646 }' 00:14:49.646 17:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.646 17:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.215 [2024-10-25 17:56:08.448522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.215 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.216 17:56:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.216 "name": "Existed_Raid", 00:14:50.216 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:50.216 "strip_size_kb": 64, 00:14:50.216 "state": "configuring", 00:14:50.216 "raid_level": "raid5f", 00:14:50.216 "superblock": true, 00:14:50.216 "num_base_bdevs": 3, 00:14:50.216 "num_base_bdevs_discovered": 1, 00:14:50.216 "num_base_bdevs_operational": 3, 00:14:50.216 "base_bdevs_list": [ 00:14:50.216 { 00:14:50.216 "name": "BaseBdev1", 00:14:50.216 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:50.216 "is_configured": true, 00:14:50.216 "data_offset": 2048, 00:14:50.216 "data_size": 63488 00:14:50.216 }, 00:14:50.216 { 00:14:50.216 "name": null, 00:14:50.216 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:50.216 "is_configured": false, 00:14:50.216 "data_offset": 0, 00:14:50.216 "data_size": 63488 00:14:50.216 }, 00:14:50.216 { 00:14:50.216 "name": null, 00:14:50.216 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:50.216 "is_configured": false, 00:14:50.216 "data_offset": 0, 00:14:50.216 "data_size": 63488 00:14:50.216 } 00:14:50.216 ] 00:14:50.216 }' 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.216 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.785 [2024-10-25 17:56:08.980611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.785 17:56:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.785 17:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.785 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.785 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.785 "name": "Existed_Raid", 00:14:50.785 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:50.785 "strip_size_kb": 64, 00:14:50.785 "state": "configuring", 00:14:50.785 "raid_level": "raid5f", 00:14:50.785 "superblock": true, 00:14:50.785 "num_base_bdevs": 3, 00:14:50.785 "num_base_bdevs_discovered": 2, 00:14:50.785 "num_base_bdevs_operational": 3, 00:14:50.785 "base_bdevs_list": [ 00:14:50.785 { 00:14:50.785 "name": "BaseBdev1", 00:14:50.785 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:50.785 "is_configured": true, 00:14:50.785 "data_offset": 2048, 00:14:50.785 "data_size": 63488 00:14:50.785 }, 00:14:50.785 { 00:14:50.785 "name": null, 00:14:50.785 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:50.785 "is_configured": false, 00:14:50.785 "data_offset": 0, 00:14:50.785 "data_size": 63488 00:14:50.785 }, 00:14:50.785 { 
00:14:50.785 "name": "BaseBdev3", 00:14:50.785 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:50.785 "is_configured": true, 00:14:50.785 "data_offset": 2048, 00:14:50.785 "data_size": 63488 00:14:50.785 } 00:14:50.785 ] 00:14:50.785 }' 00:14:50.785 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.785 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.045 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.045 [2024-10-25 17:56:09.400585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.304 "name": "Existed_Raid", 00:14:51.304 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:51.304 "strip_size_kb": 64, 00:14:51.304 "state": "configuring", 00:14:51.304 "raid_level": "raid5f", 00:14:51.304 "superblock": true, 00:14:51.304 "num_base_bdevs": 3, 00:14:51.304 "num_base_bdevs_discovered": 1, 00:14:51.304 
"num_base_bdevs_operational": 3, 00:14:51.304 "base_bdevs_list": [ 00:14:51.304 { 00:14:51.304 "name": null, 00:14:51.304 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:51.304 "is_configured": false, 00:14:51.304 "data_offset": 0, 00:14:51.304 "data_size": 63488 00:14:51.304 }, 00:14:51.304 { 00:14:51.304 "name": null, 00:14:51.304 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:51.304 "is_configured": false, 00:14:51.304 "data_offset": 0, 00:14:51.304 "data_size": 63488 00:14:51.304 }, 00:14:51.304 { 00:14:51.304 "name": "BaseBdev3", 00:14:51.304 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:51.304 "is_configured": true, 00:14:51.304 "data_offset": 2048, 00:14:51.304 "data_size": 63488 00:14:51.304 } 00:14:51.304 ] 00:14:51.304 }' 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.304 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.563 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.563 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.563 17:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.563 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.563 17:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.822 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:51.822 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.823 17:56:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.823 [2024-10-25 17:56:10.012531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.823 "name": "Existed_Raid", 00:14:51.823 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:51.823 "strip_size_kb": 64, 00:14:51.823 "state": "configuring", 00:14:51.823 "raid_level": "raid5f", 00:14:51.823 "superblock": true, 00:14:51.823 "num_base_bdevs": 3, 00:14:51.823 "num_base_bdevs_discovered": 2, 00:14:51.823 "num_base_bdevs_operational": 3, 00:14:51.823 "base_bdevs_list": [ 00:14:51.823 { 00:14:51.823 "name": null, 00:14:51.823 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:51.823 "is_configured": false, 00:14:51.823 "data_offset": 0, 00:14:51.823 "data_size": 63488 00:14:51.823 }, 00:14:51.823 { 00:14:51.823 "name": "BaseBdev2", 00:14:51.823 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:51.823 "is_configured": true, 00:14:51.823 "data_offset": 2048, 00:14:51.823 "data_size": 63488 00:14:51.823 }, 00:14:51.823 { 00:14:51.823 "name": "BaseBdev3", 00:14:51.823 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:51.823 "is_configured": true, 00:14:51.823 "data_offset": 2048, 00:14:51.823 "data_size": 63488 00:14:51.823 } 00:14:51.823 ] 00:14:51.823 }' 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.823 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.082 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd189913-075c-43f2-b916-f45e14befc73 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.341 [2024-10-25 17:56:10.594302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.341 [2024-10-25 17:56:10.594595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:52.341 [2024-10-25 17:56:10.594621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.341 [2024-10-25 17:56:10.594956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:52.341 NewBaseBdev 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.341 [2024-10-25 17:56:10.601696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:52.341 [2024-10-25 17:56:10.601722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:52.341 [2024-10-25 17:56:10.602058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.341 [ 00:14:52.341 { 00:14:52.341 "name": "NewBaseBdev", 00:14:52.341 "aliases": [ 00:14:52.341 "fd189913-075c-43f2-b916-f45e14befc73" 00:14:52.341 
], 00:14:52.341 "product_name": "Malloc disk", 00:14:52.341 "block_size": 512, 00:14:52.341 "num_blocks": 65536, 00:14:52.341 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:52.341 "assigned_rate_limits": { 00:14:52.341 "rw_ios_per_sec": 0, 00:14:52.341 "rw_mbytes_per_sec": 0, 00:14:52.341 "r_mbytes_per_sec": 0, 00:14:52.341 "w_mbytes_per_sec": 0 00:14:52.341 }, 00:14:52.341 "claimed": true, 00:14:52.341 "claim_type": "exclusive_write", 00:14:52.341 "zoned": false, 00:14:52.341 "supported_io_types": { 00:14:52.341 "read": true, 00:14:52.341 "write": true, 00:14:52.341 "unmap": true, 00:14:52.341 "flush": true, 00:14:52.341 "reset": true, 00:14:52.341 "nvme_admin": false, 00:14:52.341 "nvme_io": false, 00:14:52.341 "nvme_io_md": false, 00:14:52.341 "write_zeroes": true, 00:14:52.341 "zcopy": true, 00:14:52.341 "get_zone_info": false, 00:14:52.341 "zone_management": false, 00:14:52.341 "zone_append": false, 00:14:52.341 "compare": false, 00:14:52.341 "compare_and_write": false, 00:14:52.341 "abort": true, 00:14:52.341 "seek_hole": false, 00:14:52.341 "seek_data": false, 00:14:52.341 "copy": true, 00:14:52.341 "nvme_iov_md": false 00:14:52.341 }, 00:14:52.341 "memory_domains": [ 00:14:52.341 { 00:14:52.341 "dma_device_id": "system", 00:14:52.341 "dma_device_type": 1 00:14:52.341 }, 00:14:52.341 { 00:14:52.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.341 "dma_device_type": 2 00:14:52.341 } 00:14:52.341 ], 00:14:52.341 "driver_specific": {} 00:14:52.341 } 00:14:52.341 ] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.341 "name": "Existed_Raid", 00:14:52.341 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:52.341 "strip_size_kb": 64, 00:14:52.341 "state": "online", 00:14:52.341 "raid_level": "raid5f", 00:14:52.341 "superblock": true, 00:14:52.341 "num_base_bdevs": 3, 00:14:52.341 "num_base_bdevs_discovered": 3, 00:14:52.341 
"num_base_bdevs_operational": 3, 00:14:52.341 "base_bdevs_list": [ 00:14:52.341 { 00:14:52.341 "name": "NewBaseBdev", 00:14:52.341 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:52.341 "is_configured": true, 00:14:52.341 "data_offset": 2048, 00:14:52.341 "data_size": 63488 00:14:52.341 }, 00:14:52.341 { 00:14:52.341 "name": "BaseBdev2", 00:14:52.341 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:52.341 "is_configured": true, 00:14:52.341 "data_offset": 2048, 00:14:52.341 "data_size": 63488 00:14:52.341 }, 00:14:52.341 { 00:14:52.341 "name": "BaseBdev3", 00:14:52.341 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:52.341 "is_configured": true, 00:14:52.341 "data_offset": 2048, 00:14:52.341 "data_size": 63488 00:14:52.341 } 00:14:52.341 ] 00:14:52.341 }' 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.341 17:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.906 17:56:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.906 [2024-10-25 17:56:11.069439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.906 "name": "Existed_Raid", 00:14:52.906 "aliases": [ 00:14:52.906 "efc49da5-e1f6-4c91-8696-55f30fd48aee" 00:14:52.906 ], 00:14:52.906 "product_name": "Raid Volume", 00:14:52.906 "block_size": 512, 00:14:52.906 "num_blocks": 126976, 00:14:52.906 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:52.906 "assigned_rate_limits": { 00:14:52.906 "rw_ios_per_sec": 0, 00:14:52.906 "rw_mbytes_per_sec": 0, 00:14:52.906 "r_mbytes_per_sec": 0, 00:14:52.906 "w_mbytes_per_sec": 0 00:14:52.906 }, 00:14:52.906 "claimed": false, 00:14:52.906 "zoned": false, 00:14:52.906 "supported_io_types": { 00:14:52.906 "read": true, 00:14:52.906 "write": true, 00:14:52.906 "unmap": false, 00:14:52.906 "flush": false, 00:14:52.906 "reset": true, 00:14:52.906 "nvme_admin": false, 00:14:52.906 "nvme_io": false, 00:14:52.906 "nvme_io_md": false, 00:14:52.906 "write_zeroes": true, 00:14:52.906 "zcopy": false, 00:14:52.906 "get_zone_info": false, 00:14:52.906 "zone_management": false, 00:14:52.906 "zone_append": false, 00:14:52.906 "compare": false, 00:14:52.906 "compare_and_write": false, 00:14:52.906 "abort": false, 00:14:52.906 "seek_hole": false, 00:14:52.906 "seek_data": false, 00:14:52.906 "copy": false, 00:14:52.906 "nvme_iov_md": false 00:14:52.906 }, 00:14:52.906 "driver_specific": { 00:14:52.906 "raid": { 00:14:52.906 "uuid": "efc49da5-e1f6-4c91-8696-55f30fd48aee", 00:14:52.906 "strip_size_kb": 64, 00:14:52.906 "state": "online", 00:14:52.906 "raid_level": 
"raid5f", 00:14:52.906 "superblock": true, 00:14:52.906 "num_base_bdevs": 3, 00:14:52.906 "num_base_bdevs_discovered": 3, 00:14:52.906 "num_base_bdevs_operational": 3, 00:14:52.906 "base_bdevs_list": [ 00:14:52.906 { 00:14:52.906 "name": "NewBaseBdev", 00:14:52.906 "uuid": "fd189913-075c-43f2-b916-f45e14befc73", 00:14:52.906 "is_configured": true, 00:14:52.906 "data_offset": 2048, 00:14:52.906 "data_size": 63488 00:14:52.906 }, 00:14:52.906 { 00:14:52.906 "name": "BaseBdev2", 00:14:52.906 "uuid": "c8934d8a-d752-4c49-8069-f31baaa35cd1", 00:14:52.906 "is_configured": true, 00:14:52.906 "data_offset": 2048, 00:14:52.906 "data_size": 63488 00:14:52.906 }, 00:14:52.906 { 00:14:52.906 "name": "BaseBdev3", 00:14:52.906 "uuid": "2302f36d-4253-4f9d-bd64-f841476806b7", 00:14:52.906 "is_configured": true, 00:14:52.906 "data_offset": 2048, 00:14:52.906 "data_size": 63488 00:14:52.906 } 00:14:52.906 ] 00:14:52.906 } 00:14:52.906 } 00:14:52.906 }' 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:52.906 BaseBdev2 00:14:52.906 BaseBdev3' 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.906 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.907 17:56:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.164 [2024-10-25 17:56:11.348714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.164 [2024-10-25 17:56:11.348762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.164 [2024-10-25 17:56:11.348870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.164 [2024-10-25 17:56:11.349198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.164 [2024-10-25 17:56:11.349222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80417 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80417 ']' 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 80417 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80417 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.164 killing process with pid 80417 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80417' 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80417 00:14:53.164 17:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80417 00:14:53.164 [2024-10-25 17:56:11.381238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.439 [2024-10-25 17:56:11.723773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.815 17:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:54.815 00:14:54.815 real 0m10.934s 00:14:54.815 user 0m17.135s 00:14:54.815 sys 0m2.088s 00:14:54.815 17:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.815 17:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.815 ************************************ 00:14:54.815 END TEST raid5f_state_function_test_sb 00:14:54.815 ************************************ 00:14:54.815 17:56:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:54.815 17:56:12 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:54.815 17:56:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.815 17:56:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.815 ************************************ 00:14:54.815 START TEST raid5f_superblock_test 00:14:54.815 ************************************ 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81043 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81043 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81043 ']' 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.815 17:56:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.815 [2024-10-25 17:56:13.065437] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:14:54.815 [2024-10-25 17:56:13.065589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81043 ] 00:14:54.815 [2024-10-25 17:56:13.242165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.122 [2024-10-25 17:56:13.355169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.381 [2024-10-25 17:56:13.558586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.381 [2024-10-25 17:56:13.558660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.641 malloc1 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.641 [2024-10-25 17:56:13.965204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.641 [2024-10-25 17:56:13.965298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.641 [2024-10-25 17:56:13.965328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.641 [2024-10-25 17:56:13.965342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.641 [2024-10-25 17:56:13.967566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.641 [2024-10-25 17:56:13.967609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.641 pt1 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.641 17:56:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.641 malloc2 00:14:55.641 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.641 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.641 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.641 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.641 [2024-10-25 17:56:14.021070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.641 [2024-10-25 17:56:14.021137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.641 [2024-10-25 17:56:14.021166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.641 [2024-10-25 17:56:14.021180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.641 [2024-10-25 17:56:14.023447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.642 [2024-10-25 17:56:14.023486] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.642 pt2 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.642 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.901 malloc3 00:14:55.901 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.901 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:55.901 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.901 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.901 [2024-10-25 17:56:14.089190] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:55.901 [2024-10-25 17:56:14.089267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.901 [2024-10-25 17:56:14.089291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.901 [2024-10-25 17:56:14.089304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.902 [2024-10-25 17:56:14.091412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.902 [2024-10-25 17:56:14.091452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:55.902 pt3 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.902 [2024-10-25 17:56:14.101233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.902 [2024-10-25 17:56:14.103105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.902 [2024-10-25 17:56:14.103180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:55.902 [2024-10-25 17:56:14.103363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.902 [2024-10-25 17:56:14.103389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:55.902 [2024-10-25 17:56:14.103637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:55.902 [2024-10-25 17:56:14.109210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.902 [2024-10-25 17:56:14.109238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.902 [2024-10-25 17:56:14.109448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.902 
17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.902 "name": "raid_bdev1", 00:14:55.902 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:55.902 "strip_size_kb": 64, 00:14:55.902 "state": "online", 00:14:55.902 "raid_level": "raid5f", 00:14:55.902 "superblock": true, 00:14:55.902 "num_base_bdevs": 3, 00:14:55.902 "num_base_bdevs_discovered": 3, 00:14:55.902 "num_base_bdevs_operational": 3, 00:14:55.902 "base_bdevs_list": [ 00:14:55.902 { 00:14:55.902 "name": "pt1", 00:14:55.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.902 "is_configured": true, 00:14:55.902 "data_offset": 2048, 00:14:55.902 "data_size": 63488 00:14:55.902 }, 00:14:55.902 { 00:14:55.902 "name": "pt2", 00:14:55.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.902 "is_configured": true, 00:14:55.902 "data_offset": 2048, 00:14:55.902 "data_size": 63488 00:14:55.902 }, 00:14:55.902 { 00:14:55.902 "name": "pt3", 00:14:55.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.902 "is_configured": true, 00:14:55.902 "data_offset": 2048, 00:14:55.902 "data_size": 63488 00:14:55.902 } 00:14:55.902 ] 00:14:55.902 }' 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.902 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:56.161 17:56:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.161 [2024-10-25 17:56:14.479663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.161 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.161 "name": "raid_bdev1", 00:14:56.161 "aliases": [ 00:14:56.161 "33e18edf-49dc-43bd-8032-484febd2336e" 00:14:56.161 ], 00:14:56.161 "product_name": "Raid Volume", 00:14:56.161 "block_size": 512, 00:14:56.161 "num_blocks": 126976, 00:14:56.161 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:56.161 "assigned_rate_limits": { 00:14:56.161 "rw_ios_per_sec": 0, 00:14:56.161 "rw_mbytes_per_sec": 0, 00:14:56.161 "r_mbytes_per_sec": 0, 00:14:56.161 "w_mbytes_per_sec": 0 00:14:56.161 }, 00:14:56.161 "claimed": false, 00:14:56.161 "zoned": false, 00:14:56.161 "supported_io_types": { 00:14:56.161 "read": true, 00:14:56.161 "write": true, 00:14:56.161 "unmap": false, 00:14:56.161 "flush": false, 00:14:56.161 "reset": true, 00:14:56.161 "nvme_admin": false, 00:14:56.161 "nvme_io": false, 00:14:56.161 "nvme_io_md": false, 
00:14:56.162 "write_zeroes": true, 00:14:56.162 "zcopy": false, 00:14:56.162 "get_zone_info": false, 00:14:56.162 "zone_management": false, 00:14:56.162 "zone_append": false, 00:14:56.162 "compare": false, 00:14:56.162 "compare_and_write": false, 00:14:56.162 "abort": false, 00:14:56.162 "seek_hole": false, 00:14:56.162 "seek_data": false, 00:14:56.162 "copy": false, 00:14:56.162 "nvme_iov_md": false 00:14:56.162 }, 00:14:56.162 "driver_specific": { 00:14:56.162 "raid": { 00:14:56.162 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:56.162 "strip_size_kb": 64, 00:14:56.162 "state": "online", 00:14:56.162 "raid_level": "raid5f", 00:14:56.162 "superblock": true, 00:14:56.162 "num_base_bdevs": 3, 00:14:56.162 "num_base_bdevs_discovered": 3, 00:14:56.162 "num_base_bdevs_operational": 3, 00:14:56.162 "base_bdevs_list": [ 00:14:56.162 { 00:14:56.162 "name": "pt1", 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.162 "is_configured": true, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 }, 00:14:56.162 { 00:14:56.162 "name": "pt2", 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.162 "is_configured": true, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 }, 00:14:56.162 { 00:14:56.162 "name": "pt3", 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.162 "is_configured": true, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 } 00:14:56.162 ] 00:14:56.162 } 00:14:56.162 } 00:14:56.162 }' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:56.162 pt2 00:14:56.162 pt3' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.421 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.422 
17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 [2024-10-25 17:56:14.727193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=33e18edf-49dc-43bd-8032-484febd2336e 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 33e18edf-49dc-43bd-8032-484febd2336e ']' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.422 17:56:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 [2024-10-25 17:56:14.770949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.422 [2024-10-25 17:56:14.770985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.422 [2024-10-25 17:56:14.771072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.422 [2024-10-25 17:56:14.771154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.422 [2024-10-25 17:56:14.771166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.422 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.682 [2024-10-25 17:56:14.902778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:56.682 [2024-10-25 17:56:14.904986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:56.682 [2024-10-25 17:56:14.905060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:56.682 [2024-10-25 17:56:14.905124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:56.682 [2024-10-25 17:56:14.905205] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:56.682 [2024-10-25 17:56:14.905233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:56.682 [2024-10-25 17:56:14.905258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.682 [2024-10-25 17:56:14.905271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:56.682 request: 00:14:56.682 { 00:14:56.682 "name": "raid_bdev1", 00:14:56.682 "raid_level": "raid5f", 00:14:56.682 "base_bdevs": [ 00:14:56.682 "malloc1", 00:14:56.682 "malloc2", 00:14:56.682 "malloc3" 00:14:56.682 ], 00:14:56.682 "strip_size_kb": 64, 00:14:56.682 "superblock": false, 00:14:56.682 "method": "bdev_raid_create", 00:14:56.682 "req_id": 1 00:14:56.682 } 00:14:56.682 Got JSON-RPC error response 00:14:56.682 response: 00:14:56.682 { 00:14:56.682 "code": -17, 00:14:56.682 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:56.682 } 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.682 
17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.682 [2024-10-25 17:56:14.962611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.682 [2024-10-25 17:56:14.962681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.682 [2024-10-25 17:56:14.962706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:56.682 [2024-10-25 17:56:14.962718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.682 [2024-10-25 17:56:14.965171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.682 [2024-10-25 17:56:14.965219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.682 [2024-10-25 17:56:14.965321] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:56.682 [2024-10-25 17:56:14.965385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.682 pt1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.682 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.683 17:56:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.683 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.683 "name": "raid_bdev1", 00:14:56.683 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:56.683 "strip_size_kb": 64, 00:14:56.683 "state": "configuring", 00:14:56.683 "raid_level": "raid5f", 00:14:56.683 "superblock": true, 00:14:56.683 "num_base_bdevs": 3, 00:14:56.683 "num_base_bdevs_discovered": 1, 00:14:56.683 
"num_base_bdevs_operational": 3, 00:14:56.683 "base_bdevs_list": [ 00:14:56.683 { 00:14:56.683 "name": "pt1", 00:14:56.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.683 "is_configured": true, 00:14:56.683 "data_offset": 2048, 00:14:56.683 "data_size": 63488 00:14:56.683 }, 00:14:56.683 { 00:14:56.683 "name": null, 00:14:56.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.683 "is_configured": false, 00:14:56.683 "data_offset": 2048, 00:14:56.683 "data_size": 63488 00:14:56.683 }, 00:14:56.683 { 00:14:56.683 "name": null, 00:14:56.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.683 "is_configured": false, 00:14:56.683 "data_offset": 2048, 00:14:56.683 "data_size": 63488 00:14:56.683 } 00:14:56.683 ] 00:14:56.683 }' 00:14:56.683 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.683 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 [2024-10-25 17:56:15.433929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.251 [2024-10-25 17:56:15.434026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.251 [2024-10-25 17:56:15.434053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:57.251 [2024-10-25 17:56:15.434065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.251 [2024-10-25 17:56:15.434612] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.251 [2024-10-25 17:56:15.434652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.251 [2024-10-25 17:56:15.434765] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:57.251 [2024-10-25 17:56:15.434799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.251 pt2 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 [2024-10-25 17:56:15.445963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.251 "name": "raid_bdev1", 00:14:57.251 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:57.251 "strip_size_kb": 64, 00:14:57.251 "state": "configuring", 00:14:57.251 "raid_level": "raid5f", 00:14:57.251 "superblock": true, 00:14:57.251 "num_base_bdevs": 3, 00:14:57.251 "num_base_bdevs_discovered": 1, 00:14:57.251 "num_base_bdevs_operational": 3, 00:14:57.251 "base_bdevs_list": [ 00:14:57.251 { 00:14:57.251 "name": "pt1", 00:14:57.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.251 "is_configured": true, 00:14:57.251 "data_offset": 2048, 00:14:57.251 "data_size": 63488 00:14:57.251 }, 00:14:57.251 { 00:14:57.251 "name": null, 00:14:57.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.251 "is_configured": false, 00:14:57.251 "data_offset": 0, 00:14:57.251 "data_size": 63488 00:14:57.251 }, 00:14:57.251 { 00:14:57.251 "name": null, 00:14:57.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.251 "is_configured": false, 00:14:57.251 "data_offset": 2048, 00:14:57.251 "data_size": 63488 00:14:57.251 } 00:14:57.251 ] 00:14:57.251 }' 00:14:57.251 17:56:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.251 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.511 [2024-10-25 17:56:15.897089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.511 [2024-10-25 17:56:15.897175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.511 [2024-10-25 17:56:15.897197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:57.511 [2024-10-25 17:56:15.897211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.511 [2024-10-25 17:56:15.897738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.511 [2024-10-25 17:56:15.897780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.511 [2024-10-25 17:56:15.897895] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:57.511 [2024-10-25 17:56:15.897930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.511 pt2 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:57.511 17:56:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.511 [2024-10-25 17:56:15.905074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:57.511 [2024-10-25 17:56:15.905142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.511 [2024-10-25 17:56:15.905162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:57.511 [2024-10-25 17:56:15.905176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.511 [2024-10-25 17:56:15.905628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.511 [2024-10-25 17:56:15.905676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:57.511 [2024-10-25 17:56:15.905766] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:57.511 [2024-10-25 17:56:15.905802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:57.511 [2024-10-25 17:56:15.905979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.511 [2024-10-25 17:56:15.906002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.511 [2024-10-25 17:56:15.906264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:57.511 [2024-10-25 17:56:15.911340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.511 [2024-10-25 17:56:15.911373] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:57.511 [2024-10-25 17:56:15.911616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.511 pt3 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.511 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.771 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.771 "name": "raid_bdev1", 00:14:57.771 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:57.771 "strip_size_kb": 64, 00:14:57.771 "state": "online", 00:14:57.771 "raid_level": "raid5f", 00:14:57.771 "superblock": true, 00:14:57.771 "num_base_bdevs": 3, 00:14:57.771 "num_base_bdevs_discovered": 3, 00:14:57.771 "num_base_bdevs_operational": 3, 00:14:57.771 "base_bdevs_list": [ 00:14:57.771 { 00:14:57.771 "name": "pt1", 00:14:57.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.771 "is_configured": true, 00:14:57.771 "data_offset": 2048, 00:14:57.771 "data_size": 63488 00:14:57.771 }, 00:14:57.771 { 00:14:57.771 "name": "pt2", 00:14:57.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.771 "is_configured": true, 00:14:57.771 "data_offset": 2048, 00:14:57.771 "data_size": 63488 00:14:57.771 }, 00:14:57.771 { 00:14:57.771 "name": "pt3", 00:14:57.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.771 "is_configured": true, 00:14:57.771 "data_offset": 2048, 00:14:57.771 "data_size": 63488 00:14:57.771 } 00:14:57.771 ] 00:14:57.771 }' 00:14:57.771 17:56:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.771 17:56:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.032 
17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.032 [2024-10-25 17:56:16.370001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.032 "name": "raid_bdev1", 00:14:58.032 "aliases": [ 00:14:58.032 "33e18edf-49dc-43bd-8032-484febd2336e" 00:14:58.032 ], 00:14:58.032 "product_name": "Raid Volume", 00:14:58.032 "block_size": 512, 00:14:58.032 "num_blocks": 126976, 00:14:58.032 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:58.032 "assigned_rate_limits": { 00:14:58.032 "rw_ios_per_sec": 0, 00:14:58.032 "rw_mbytes_per_sec": 0, 00:14:58.032 "r_mbytes_per_sec": 0, 00:14:58.032 "w_mbytes_per_sec": 0 00:14:58.032 }, 00:14:58.032 "claimed": false, 00:14:58.032 "zoned": false, 00:14:58.032 "supported_io_types": { 00:14:58.032 "read": true, 00:14:58.032 "write": true, 00:14:58.032 "unmap": false, 00:14:58.032 "flush": false, 00:14:58.032 "reset": true, 00:14:58.032 "nvme_admin": false, 00:14:58.032 "nvme_io": false, 00:14:58.032 "nvme_io_md": false, 00:14:58.032 "write_zeroes": true, 00:14:58.032 "zcopy": false, 00:14:58.032 "get_zone_info": false, 
00:14:58.032 "zone_management": false, 00:14:58.032 "zone_append": false, 00:14:58.032 "compare": false, 00:14:58.032 "compare_and_write": false, 00:14:58.032 "abort": false, 00:14:58.032 "seek_hole": false, 00:14:58.032 "seek_data": false, 00:14:58.032 "copy": false, 00:14:58.032 "nvme_iov_md": false 00:14:58.032 }, 00:14:58.032 "driver_specific": { 00:14:58.032 "raid": { 00:14:58.032 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:58.032 "strip_size_kb": 64, 00:14:58.032 "state": "online", 00:14:58.032 "raid_level": "raid5f", 00:14:58.032 "superblock": true, 00:14:58.032 "num_base_bdevs": 3, 00:14:58.032 "num_base_bdevs_discovered": 3, 00:14:58.032 "num_base_bdevs_operational": 3, 00:14:58.032 "base_bdevs_list": [ 00:14:58.032 { 00:14:58.032 "name": "pt1", 00:14:58.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.032 "is_configured": true, 00:14:58.032 "data_offset": 2048, 00:14:58.032 "data_size": 63488 00:14:58.032 }, 00:14:58.032 { 00:14:58.032 "name": "pt2", 00:14:58.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.032 "is_configured": true, 00:14:58.032 "data_offset": 2048, 00:14:58.032 "data_size": 63488 00:14:58.032 }, 00:14:58.032 { 00:14:58.032 "name": "pt3", 00:14:58.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.032 "is_configured": true, 00:14:58.032 "data_offset": 2048, 00:14:58.032 "data_size": 63488 00:14:58.032 } 00:14:58.032 ] 00:14:58.032 } 00:14:58.032 } 00:14:58.032 }' 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:58.032 pt2 00:14:58.032 pt3' 00:14:58.032 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.292 [2024-10-25 17:56:16.661433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 33e18edf-49dc-43bd-8032-484febd2336e '!=' 33e18edf-49dc-43bd-8032-484febd2336e ']' 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:58.292 17:56:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.292 [2024-10-25 17:56:16.689235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.292 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.293 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.552 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.552 "name": "raid_bdev1", 00:14:58.552 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:58.552 "strip_size_kb": 64, 00:14:58.552 "state": "online", 00:14:58.552 "raid_level": "raid5f", 00:14:58.552 "superblock": true, 00:14:58.552 "num_base_bdevs": 3, 00:14:58.552 "num_base_bdevs_discovered": 2, 00:14:58.552 "num_base_bdevs_operational": 2, 00:14:58.552 "base_bdevs_list": [ 00:14:58.552 { 00:14:58.552 "name": null, 00:14:58.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.552 "is_configured": false, 00:14:58.552 "data_offset": 0, 00:14:58.552 "data_size": 63488 00:14:58.552 }, 00:14:58.552 { 00:14:58.552 "name": "pt2", 00:14:58.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.552 "is_configured": true, 00:14:58.552 "data_offset": 2048, 00:14:58.552 "data_size": 63488 00:14:58.552 }, 00:14:58.552 { 00:14:58.552 "name": "pt3", 00:14:58.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.552 "is_configured": true, 00:14:58.552 "data_offset": 2048, 00:14:58.552 "data_size": 63488 00:14:58.552 } 00:14:58.552 ] 00:14:58.552 }' 00:14:58.552 17:56:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.552 17:56:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 [2024-10-25 17:56:17.156377] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:14:58.812 [2024-10-25 17:56:17.156423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.812 [2024-10-25 17:56:17.156536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.812 [2024-10-25 17:56:17.156600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.812 [2024-10-25 17:56:17.156614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 17:56:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.812 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 [2024-10-25 17:56:17.244198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.812 [2024-10-25 17:56:17.244288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.812 [2024-10-25 17:56:17.244308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:58.812 [2024-10-25 17:56:17.244320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:58.812 [2024-10-25 17:56:17.246667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.812 [2024-10-25 17:56:17.246715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.812 [2024-10-25 17:56:17.246809] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.812 [2024-10-25 17:56:17.246872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.072 pt2 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.072 "name": "raid_bdev1", 00:14:59.072 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:59.072 "strip_size_kb": 64, 00:14:59.072 "state": "configuring", 00:14:59.072 "raid_level": "raid5f", 00:14:59.072 "superblock": true, 00:14:59.072 "num_base_bdevs": 3, 00:14:59.072 "num_base_bdevs_discovered": 1, 00:14:59.072 "num_base_bdevs_operational": 2, 00:14:59.072 "base_bdevs_list": [ 00:14:59.072 { 00:14:59.072 "name": null, 00:14:59.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.072 "is_configured": false, 00:14:59.072 "data_offset": 2048, 00:14:59.072 "data_size": 63488 00:14:59.072 }, 00:14:59.072 { 00:14:59.072 "name": "pt2", 00:14:59.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.072 "is_configured": true, 00:14:59.072 "data_offset": 2048, 00:14:59.072 "data_size": 63488 00:14:59.072 }, 00:14:59.072 { 00:14:59.072 "name": null, 00:14:59.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.072 "is_configured": false, 00:14:59.072 "data_offset": 2048, 00:14:59.072 "data_size": 63488 00:14:59.072 } 00:14:59.072 ] 00:14:59.072 }' 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.072 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.332 [2024-10-25 17:56:17.727379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:59.332 [2024-10-25 17:56:17.727464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.332 [2024-10-25 17:56:17.727488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:59.332 [2024-10-25 17:56:17.727500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.332 [2024-10-25 17:56:17.728030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.332 [2024-10-25 17:56:17.728063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:59.332 [2024-10-25 17:56:17.728154] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:59.332 [2024-10-25 17:56:17.728195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:59.332 [2024-10-25 17:56:17.728329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:59.332 [2024-10-25 17:56:17.728367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:59.332 [2024-10-25 17:56:17.728628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:59.332 [2024-10-25 17:56:17.733846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:59.332 [2024-10-25 17:56:17.733873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:14:59.332 [2024-10-25 17:56:17.734234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.332 pt3 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.332 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.592 17:56:17 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.592 "name": "raid_bdev1", 00:14:59.592 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:59.592 "strip_size_kb": 64, 00:14:59.592 "state": "online", 00:14:59.592 "raid_level": "raid5f", 00:14:59.592 "superblock": true, 00:14:59.592 "num_base_bdevs": 3, 00:14:59.592 "num_base_bdevs_discovered": 2, 00:14:59.592 "num_base_bdevs_operational": 2, 00:14:59.592 "base_bdevs_list": [ 00:14:59.592 { 00:14:59.592 "name": null, 00:14:59.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.592 "is_configured": false, 00:14:59.592 "data_offset": 2048, 00:14:59.592 "data_size": 63488 00:14:59.592 }, 00:14:59.592 { 00:14:59.592 "name": "pt2", 00:14:59.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.592 "is_configured": true, 00:14:59.592 "data_offset": 2048, 00:14:59.592 "data_size": 63488 00:14:59.592 }, 00:14:59.592 { 00:14:59.592 "name": "pt3", 00:14:59.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.592 "is_configured": true, 00:14:59.592 "data_offset": 2048, 00:14:59.592 "data_size": 63488 00:14:59.592 } 00:14:59.592 ] 00:14:59.592 }' 00:14:59.592 17:56:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.592 17:56:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 [2024-10-25 17:56:18.156549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.852 [2024-10-25 17:56:18.156592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.852 [2024-10-25 17:56:18.156685] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:59.852 [2024-10-25 17:56:18.156781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.852 [2024-10-25 17:56:18.156798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.852 17:56:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 [2024-10-25 17:56:18.216520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.852 [2024-10-25 17:56:18.216589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.852 [2024-10-25 17:56:18.216613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:59.852 [2024-10-25 17:56:18.216624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.852 [2024-10-25 17:56:18.219437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.852 [2024-10-25 17:56:18.219480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.852 [2024-10-25 17:56:18.219574] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:59.852 [2024-10-25 17:56:18.219628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.852 [2024-10-25 17:56:18.219783] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:59.852 [2024-10-25 17:56:18.219806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.852 [2024-10-25 17:56:18.219866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:59.852 [2024-10-25 17:56:18.219964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.852 pt1 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:59.852 17:56:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.852 "name": "raid_bdev1", 00:14:59.852 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:14:59.852 "strip_size_kb": 64, 00:14:59.852 "state": "configuring", 00:14:59.852 "raid_level": "raid5f", 00:14:59.852 
"superblock": true, 00:14:59.852 "num_base_bdevs": 3, 00:14:59.852 "num_base_bdevs_discovered": 1, 00:14:59.852 "num_base_bdevs_operational": 2, 00:14:59.852 "base_bdevs_list": [ 00:14:59.852 { 00:14:59.852 "name": null, 00:14:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.852 "is_configured": false, 00:14:59.852 "data_offset": 2048, 00:14:59.852 "data_size": 63488 00:14:59.852 }, 00:14:59.852 { 00:14:59.852 "name": "pt2", 00:14:59.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.852 "is_configured": true, 00:14:59.852 "data_offset": 2048, 00:14:59.852 "data_size": 63488 00:14:59.852 }, 00:14:59.852 { 00:14:59.852 "name": null, 00:14:59.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.852 "is_configured": false, 00:14:59.852 "data_offset": 2048, 00:14:59.852 "data_size": 63488 00:14:59.852 } 00:14:59.852 ] 00:14:59.852 }' 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.852 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.422 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.423 [2024-10-25 17:56:18.711860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.423 [2024-10-25 17:56:18.711928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.423 [2024-10-25 17:56:18.711967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:00.423 [2024-10-25 17:56:18.711979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.423 [2024-10-25 17:56:18.712523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.423 [2024-10-25 17:56:18.712553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.423 [2024-10-25 17:56:18.712646] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:00.423 [2024-10-25 17:56:18.712675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.423 [2024-10-25 17:56:18.712841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:00.423 [2024-10-25 17:56:18.712859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.423 [2024-10-25 17:56:18.713133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:00.423 [2024-10-25 17:56:18.719134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:00.423 [2024-10-25 17:56:18.719162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:00.423 [2024-10-25 17:56:18.719415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.423 pt3 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.423 "name": "raid_bdev1", 00:15:00.423 "uuid": "33e18edf-49dc-43bd-8032-484febd2336e", 00:15:00.423 "strip_size_kb": 64, 00:15:00.423 "state": "online", 00:15:00.423 "raid_level": 
"raid5f", 00:15:00.423 "superblock": true, 00:15:00.423 "num_base_bdevs": 3, 00:15:00.423 "num_base_bdevs_discovered": 2, 00:15:00.423 "num_base_bdevs_operational": 2, 00:15:00.423 "base_bdevs_list": [ 00:15:00.423 { 00:15:00.423 "name": null, 00:15:00.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.423 "is_configured": false, 00:15:00.423 "data_offset": 2048, 00:15:00.423 "data_size": 63488 00:15:00.423 }, 00:15:00.423 { 00:15:00.423 "name": "pt2", 00:15:00.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.423 "is_configured": true, 00:15:00.423 "data_offset": 2048, 00:15:00.423 "data_size": 63488 00:15:00.423 }, 00:15:00.423 { 00:15:00.423 "name": "pt3", 00:15:00.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.423 "is_configured": true, 00:15:00.423 "data_offset": 2048, 00:15:00.423 "data_size": 63488 00:15:00.423 } 00:15:00.423 ] 00:15:00.423 }' 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.423 17:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:00.993 [2024-10-25 17:56:19.194109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 33e18edf-49dc-43bd-8032-484febd2336e '!=' 33e18edf-49dc-43bd-8032-484febd2336e ']' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81043 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81043 ']' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81043 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81043 00:15:00.993 killing process with pid 81043 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81043' 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81043 00:15:00.993 [2024-10-25 17:56:19.278416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.993 17:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 
81043 00:15:00.993 [2024-10-25 17:56:19.278528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.993 [2024-10-25 17:56:19.278603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.993 [2024-10-25 17:56:19.278617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:01.258 [2024-10-25 17:56:19.578832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.647 17:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:02.647 00:15:02.647 real 0m7.728s 00:15:02.647 user 0m12.037s 00:15:02.647 sys 0m1.407s 00:15:02.647 17:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.647 17:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 ************************************ 00:15:02.647 END TEST raid5f_superblock_test 00:15:02.647 ************************************ 00:15:02.647 17:56:20 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:02.647 17:56:20 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:02.647 17:56:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:02.647 17:56:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.647 17:56:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 ************************************ 00:15:02.647 START TEST raid5f_rebuild_test 00:15:02.647 ************************************ 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.647 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.648 17:56:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81487 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81487 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81487 ']' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.648 17:56:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.648 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:02.648 Zero copy mechanism will not be used. 00:15:02.648 [2024-10-25 17:56:20.871714] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:15:02.648 [2024-10-25 17:56:20.871864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81487 ] 00:15:02.648 [2024-10-25 17:56:21.042593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.906 [2024-10-25 17:56:21.161025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.166 [2024-10-25 17:56:21.361034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.166 [2024-10-25 17:56:21.361086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.425 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 BaseBdev1_malloc 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.426 17:56:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 [2024-10-25 17:56:21.767280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.426 [2024-10-25 17:56:21.767374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.426 [2024-10-25 17:56:21.767399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.426 [2024-10-25 17:56:21.767413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.426 [2024-10-25 17:56:21.769827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.426 [2024-10-25 17:56:21.769884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.426 BaseBdev1 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 BaseBdev2_malloc 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 [2024-10-25 17:56:21.823751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:03.426 [2024-10-25 17:56:21.823822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.426 [2024-10-25 17:56:21.823854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.426 [2024-10-25 17:56:21.823867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.426 [2024-10-25 17:56:21.826101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.426 [2024-10-25 17:56:21.826145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.426 BaseBdev2 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.426 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 BaseBdev3_malloc 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 [2024-10-25 17:56:21.894665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:03.686 [2024-10-25 17:56:21.894737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.686 [2024-10-25 17:56:21.894756] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:03.686 [2024-10-25 17:56:21.894768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.686 [2024-10-25 17:56:21.896950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.686 [2024-10-25 17:56:21.896993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:03.686 BaseBdev3 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 spare_malloc 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 spare_delay 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 [2024-10-25 17:56:21.955277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.686 [2024-10-25 17:56:21.955350] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.686 [2024-10-25 17:56:21.955367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.686 [2024-10-25 17:56:21.955377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.686 [2024-10-25 17:56:21.957454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.686 [2024-10-25 17:56:21.957496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.686 spare 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 [2024-10-25 17:56:21.967324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.686 [2024-10-25 17:56:21.969112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.686 [2024-10-25 17:56:21.969179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.686 [2024-10-25 17:56:21.969258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.686 [2024-10-25 17:56:21.969287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:03.686 [2024-10-25 17:56:21.969535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:03.686 [2024-10-25 17:56:21.975409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.686 [2024-10-25 17:56:21.975436] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.686 [2024-10-25 17:56:21.975621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 17:56:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 17:56:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.686 "name": "raid_bdev1", 00:15:03.686 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:03.686 "strip_size_kb": 64, 00:15:03.686 "state": "online", 00:15:03.686 "raid_level": "raid5f", 00:15:03.686 "superblock": false, 00:15:03.686 "num_base_bdevs": 3, 00:15:03.686 "num_base_bdevs_discovered": 3, 00:15:03.687 "num_base_bdevs_operational": 3, 00:15:03.687 "base_bdevs_list": [ 00:15:03.687 { 00:15:03.687 "name": "BaseBdev1", 00:15:03.687 "uuid": "b86ad2b2-b7a8-5e73-bbac-3684c67db649", 00:15:03.687 "is_configured": true, 00:15:03.687 "data_offset": 0, 00:15:03.687 "data_size": 65536 00:15:03.687 }, 00:15:03.687 { 00:15:03.687 "name": "BaseBdev2", 00:15:03.687 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:03.687 "is_configured": true, 00:15:03.687 "data_offset": 0, 00:15:03.687 "data_size": 65536 00:15:03.687 }, 00:15:03.687 { 00:15:03.687 "name": "BaseBdev3", 00:15:03.687 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:03.687 "is_configured": true, 00:15:03.687 "data_offset": 0, 00:15:03.687 "data_size": 65536 00:15:03.687 } 00:15:03.687 ] 00:15:03.687 }' 00:15:03.687 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.687 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.946 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:03.946 [2024-10-25 17:56:22.381664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:04.206 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.465 [2024-10-25 17:56:22.661031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:04.465 /dev/nbd0 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.465 1+0 records in 00:15:04.465 1+0 records out 00:15:04.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437198 s, 9.4 MB/s 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:04.465 17:56:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:04.726 512+0 records in 00:15:04.726 512+0 records out 00:15:04.726 67108864 bytes (67 MB, 64 MiB) copied, 0.381469 s, 176 MB/s 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.726 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.987 
17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.987 [2024-10-25 17:56:23.335938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.987 [2024-10-25 17:56:23.348042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.987 "name": "raid_bdev1", 00:15:04.987 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:04.987 "strip_size_kb": 64, 00:15:04.987 "state": "online", 00:15:04.987 "raid_level": "raid5f", 00:15:04.987 "superblock": false, 00:15:04.987 "num_base_bdevs": 3, 00:15:04.987 "num_base_bdevs_discovered": 2, 00:15:04.987 "num_base_bdevs_operational": 2, 00:15:04.987 "base_bdevs_list": [ 00:15:04.987 { 00:15:04.987 "name": null, 00:15:04.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.987 "is_configured": false, 00:15:04.987 "data_offset": 0, 00:15:04.987 "data_size": 65536 00:15:04.987 }, 00:15:04.987 { 00:15:04.987 "name": "BaseBdev2", 00:15:04.987 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:04.987 "is_configured": true, 00:15:04.987 "data_offset": 0, 00:15:04.987 "data_size": 65536 00:15:04.987 }, 00:15:04.987 { 00:15:04.987 "name": "BaseBdev3", 00:15:04.987 "uuid": 
"b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:04.987 "is_configured": true, 00:15:04.987 "data_offset": 0, 00:15:04.987 "data_size": 65536 00:15:04.987 } 00:15:04.987 ] 00:15:04.987 }' 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.987 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.556 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.556 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.556 [2024-10-25 17:56:23.815267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.556 [2024-10-25 17:56:23.835193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:05.556 17:56:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.556 17:56:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:05.556 [2024-10-25 17:56:23.845200] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.492 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.492 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.493 17:56:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.493 "name": "raid_bdev1", 00:15:06.493 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:06.493 "strip_size_kb": 64, 00:15:06.493 "state": "online", 00:15:06.493 "raid_level": "raid5f", 00:15:06.493 "superblock": false, 00:15:06.493 "num_base_bdevs": 3, 00:15:06.493 "num_base_bdevs_discovered": 3, 00:15:06.493 "num_base_bdevs_operational": 3, 00:15:06.493 "process": { 00:15:06.493 "type": "rebuild", 00:15:06.493 "target": "spare", 00:15:06.493 "progress": { 00:15:06.493 "blocks": 20480, 00:15:06.493 "percent": 15 00:15:06.493 } 00:15:06.493 }, 00:15:06.493 "base_bdevs_list": [ 00:15:06.493 { 00:15:06.493 "name": "spare", 00:15:06.493 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:06.493 "is_configured": true, 00:15:06.493 "data_offset": 0, 00:15:06.493 "data_size": 65536 00:15:06.493 }, 00:15:06.493 { 00:15:06.493 "name": "BaseBdev2", 00:15:06.493 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:06.493 "is_configured": true, 00:15:06.493 "data_offset": 0, 00:15:06.493 "data_size": 65536 00:15:06.493 }, 00:15:06.493 { 00:15:06.493 "name": "BaseBdev3", 00:15:06.493 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:06.493 "is_configured": true, 00:15:06.493 "data_offset": 0, 00:15:06.493 "data_size": 65536 00:15:06.493 } 00:15:06.493 ] 00:15:06.493 }' 00:15:06.493 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.751 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.751 17:56:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.751 [2024-10-25 17:56:25.012128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.751 [2024-10-25 17:56:25.056119] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.751 [2024-10-25 17:56:25.056213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.751 [2024-10-25 17:56:25.056235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.751 [2024-10-25 17:56:25.056245] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.751 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.752 "name": "raid_bdev1", 00:15:06.752 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:06.752 "strip_size_kb": 64, 00:15:06.752 "state": "online", 00:15:06.752 "raid_level": "raid5f", 00:15:06.752 "superblock": false, 00:15:06.752 "num_base_bdevs": 3, 00:15:06.752 "num_base_bdevs_discovered": 2, 00:15:06.752 "num_base_bdevs_operational": 2, 00:15:06.752 "base_bdevs_list": [ 00:15:06.752 { 00:15:06.752 "name": null, 00:15:06.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.752 "is_configured": false, 00:15:06.752 "data_offset": 0, 00:15:06.752 "data_size": 65536 00:15:06.752 }, 00:15:06.752 { 00:15:06.752 "name": "BaseBdev2", 00:15:06.752 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:06.752 "is_configured": true, 00:15:06.752 "data_offset": 0, 00:15:06.752 "data_size": 65536 00:15:06.752 }, 00:15:06.752 { 00:15:06.752 "name": "BaseBdev3", 00:15:06.752 "uuid": 
"b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:06.752 "is_configured": true, 00:15:06.752 "data_offset": 0, 00:15:06.752 "data_size": 65536 00:15:06.752 } 00:15:06.752 ] 00:15:06.752 }' 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.752 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.320 "name": "raid_bdev1", 00:15:07.320 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:07.320 "strip_size_kb": 64, 00:15:07.320 "state": "online", 00:15:07.320 "raid_level": "raid5f", 00:15:07.320 "superblock": false, 00:15:07.320 "num_base_bdevs": 3, 00:15:07.320 "num_base_bdevs_discovered": 2, 00:15:07.320 "num_base_bdevs_operational": 2, 00:15:07.320 "base_bdevs_list": [ 00:15:07.320 { 00:15:07.320 
"name": null, 00:15:07.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.320 "is_configured": false, 00:15:07.320 "data_offset": 0, 00:15:07.320 "data_size": 65536 00:15:07.320 }, 00:15:07.320 { 00:15:07.320 "name": "BaseBdev2", 00:15:07.320 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:07.320 "is_configured": true, 00:15:07.320 "data_offset": 0, 00:15:07.320 "data_size": 65536 00:15:07.320 }, 00:15:07.320 { 00:15:07.320 "name": "BaseBdev3", 00:15:07.320 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:07.320 "is_configured": true, 00:15:07.320 "data_offset": 0, 00:15:07.320 "data_size": 65536 00:15:07.320 } 00:15:07.320 ] 00:15:07.320 }' 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.320 [2024-10-25 17:56:25.687652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.320 [2024-10-25 17:56:25.703743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.320 17:56:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:07.320 [2024-10-25 17:56:25.711325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:08.308 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.308 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.308 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.308 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.309 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.567 "name": "raid_bdev1", 00:15:08.567 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:08.567 "strip_size_kb": 64, 00:15:08.567 "state": "online", 00:15:08.567 "raid_level": "raid5f", 00:15:08.567 "superblock": false, 00:15:08.567 "num_base_bdevs": 3, 00:15:08.567 "num_base_bdevs_discovered": 3, 00:15:08.567 "num_base_bdevs_operational": 3, 00:15:08.567 "process": { 00:15:08.567 "type": "rebuild", 00:15:08.567 "target": "spare", 00:15:08.567 "progress": { 00:15:08.567 "blocks": 20480, 00:15:08.567 "percent": 15 00:15:08.567 } 00:15:08.567 }, 00:15:08.567 "base_bdevs_list": [ 00:15:08.567 { 00:15:08.567 "name": "spare", 00:15:08.567 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 
00:15:08.567 "data_size": 65536 00:15:08.567 }, 00:15:08.567 { 00:15:08.567 "name": "BaseBdev2", 00:15:08.567 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 00:15:08.567 "data_size": 65536 00:15:08.567 }, 00:15:08.567 { 00:15:08.567 "name": "BaseBdev3", 00:15:08.567 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 00:15:08.567 "data_size": 65536 00:15:08.567 } 00:15:08.567 ] 00:15:08.567 }' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.567 17:56:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.567 "name": "raid_bdev1", 00:15:08.567 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:08.567 "strip_size_kb": 64, 00:15:08.567 "state": "online", 00:15:08.567 "raid_level": "raid5f", 00:15:08.567 "superblock": false, 00:15:08.567 "num_base_bdevs": 3, 00:15:08.567 "num_base_bdevs_discovered": 3, 00:15:08.567 "num_base_bdevs_operational": 3, 00:15:08.567 "process": { 00:15:08.567 "type": "rebuild", 00:15:08.567 "target": "spare", 00:15:08.567 "progress": { 00:15:08.567 "blocks": 22528, 00:15:08.567 "percent": 17 00:15:08.567 } 00:15:08.567 }, 00:15:08.567 "base_bdevs_list": [ 00:15:08.567 { 00:15:08.567 "name": "spare", 00:15:08.567 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 00:15:08.567 "data_size": 65536 00:15:08.567 }, 00:15:08.567 { 00:15:08.567 "name": "BaseBdev2", 00:15:08.567 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 00:15:08.567 "data_size": 65536 00:15:08.567 }, 00:15:08.567 { 00:15:08.567 "name": "BaseBdev3", 00:15:08.567 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:08.567 "is_configured": true, 00:15:08.567 "data_offset": 0, 00:15:08.567 "data_size": 65536 00:15:08.567 } 
00:15:08.567 ] 00:15:08.567 }' 00:15:08.567 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.568 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.568 17:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.826 17:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.827 17:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.766 "name": "raid_bdev1", 00:15:09.766 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:09.766 
"strip_size_kb": 64, 00:15:09.766 "state": "online", 00:15:09.766 "raid_level": "raid5f", 00:15:09.766 "superblock": false, 00:15:09.766 "num_base_bdevs": 3, 00:15:09.766 "num_base_bdevs_discovered": 3, 00:15:09.766 "num_base_bdevs_operational": 3, 00:15:09.766 "process": { 00:15:09.766 "type": "rebuild", 00:15:09.766 "target": "spare", 00:15:09.766 "progress": { 00:15:09.766 "blocks": 45056, 00:15:09.766 "percent": 34 00:15:09.766 } 00:15:09.766 }, 00:15:09.766 "base_bdevs_list": [ 00:15:09.766 { 00:15:09.766 "name": "spare", 00:15:09.766 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:09.766 "is_configured": true, 00:15:09.766 "data_offset": 0, 00:15:09.766 "data_size": 65536 00:15:09.766 }, 00:15:09.766 { 00:15:09.766 "name": "BaseBdev2", 00:15:09.766 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:09.766 "is_configured": true, 00:15:09.766 "data_offset": 0, 00:15:09.766 "data_size": 65536 00:15:09.766 }, 00:15:09.766 { 00:15:09.766 "name": "BaseBdev3", 00:15:09.766 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:09.766 "is_configured": true, 00:15:09.766 "data_offset": 0, 00:15:09.766 "data_size": 65536 00:15:09.766 } 00:15:09.766 ] 00:15:09.766 }' 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.766 17:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.146 17:56:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.146 "name": "raid_bdev1", 00:15:11.146 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:11.146 "strip_size_kb": 64, 00:15:11.146 "state": "online", 00:15:11.146 "raid_level": "raid5f", 00:15:11.146 "superblock": false, 00:15:11.146 "num_base_bdevs": 3, 00:15:11.146 "num_base_bdevs_discovered": 3, 00:15:11.146 "num_base_bdevs_operational": 3, 00:15:11.146 "process": { 00:15:11.146 "type": "rebuild", 00:15:11.146 "target": "spare", 00:15:11.146 "progress": { 00:15:11.146 "blocks": 69632, 00:15:11.146 "percent": 53 00:15:11.146 } 00:15:11.146 }, 00:15:11.146 "base_bdevs_list": [ 00:15:11.146 { 00:15:11.146 "name": "spare", 00:15:11.146 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:11.146 "is_configured": true, 00:15:11.146 "data_offset": 0, 00:15:11.146 "data_size": 65536 00:15:11.146 }, 00:15:11.146 { 00:15:11.146 "name": "BaseBdev2", 00:15:11.146 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:11.146 
"is_configured": true, 00:15:11.146 "data_offset": 0, 00:15:11.146 "data_size": 65536 00:15:11.146 }, 00:15:11.146 { 00:15:11.146 "name": "BaseBdev3", 00:15:11.146 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:11.146 "is_configured": true, 00:15:11.146 "data_offset": 0, 00:15:11.146 "data_size": 65536 00:15:11.146 } 00:15:11.146 ] 00:15:11.146 }' 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.146 17:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.085 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.085 "name": "raid_bdev1", 00:15:12.085 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:12.085 "strip_size_kb": 64, 00:15:12.085 "state": "online", 00:15:12.085 "raid_level": "raid5f", 00:15:12.085 "superblock": false, 00:15:12.085 "num_base_bdevs": 3, 00:15:12.085 "num_base_bdevs_discovered": 3, 00:15:12.085 "num_base_bdevs_operational": 3, 00:15:12.085 "process": { 00:15:12.085 "type": "rebuild", 00:15:12.085 "target": "spare", 00:15:12.085 "progress": { 00:15:12.085 "blocks": 92160, 00:15:12.085 "percent": 70 00:15:12.085 } 00:15:12.085 }, 00:15:12.085 "base_bdevs_list": [ 00:15:12.085 { 00:15:12.085 "name": "spare", 00:15:12.085 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:12.085 "is_configured": true, 00:15:12.085 "data_offset": 0, 00:15:12.085 "data_size": 65536 00:15:12.085 }, 00:15:12.085 { 00:15:12.085 "name": "BaseBdev2", 00:15:12.085 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:12.085 "is_configured": true, 00:15:12.085 "data_offset": 0, 00:15:12.085 "data_size": 65536 00:15:12.085 }, 00:15:12.085 { 00:15:12.085 "name": "BaseBdev3", 00:15:12.086 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:12.086 "is_configured": true, 00:15:12.086 "data_offset": 0, 00:15:12.086 "data_size": 65536 00:15:12.086 } 00:15:12.086 ] 00:15:12.086 }' 00:15:12.086 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.086 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.086 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.086 17:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.086 17:56:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.026 17:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.286 "name": "raid_bdev1", 00:15:13.286 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:13.286 "strip_size_kb": 64, 00:15:13.286 "state": "online", 00:15:13.286 "raid_level": "raid5f", 00:15:13.286 "superblock": false, 00:15:13.286 "num_base_bdevs": 3, 00:15:13.286 "num_base_bdevs_discovered": 3, 00:15:13.286 "num_base_bdevs_operational": 3, 00:15:13.286 "process": { 00:15:13.286 "type": "rebuild", 00:15:13.286 "target": "spare", 00:15:13.286 "progress": { 00:15:13.286 "blocks": 114688, 00:15:13.286 "percent": 87 00:15:13.286 } 00:15:13.286 }, 00:15:13.286 "base_bdevs_list": [ 00:15:13.286 { 
00:15:13.286 "name": "spare", 00:15:13.286 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:13.286 "is_configured": true, 00:15:13.286 "data_offset": 0, 00:15:13.286 "data_size": 65536 00:15:13.286 }, 00:15:13.286 { 00:15:13.286 "name": "BaseBdev2", 00:15:13.286 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:13.286 "is_configured": true, 00:15:13.286 "data_offset": 0, 00:15:13.286 "data_size": 65536 00:15:13.286 }, 00:15:13.286 { 00:15:13.286 "name": "BaseBdev3", 00:15:13.286 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:13.286 "is_configured": true, 00:15:13.286 "data_offset": 0, 00:15:13.286 "data_size": 65536 00:15:13.286 } 00:15:13.286 ] 00:15:13.286 }' 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.286 17:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.854 [2024-10-25 17:56:32.180876] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:13.854 [2024-10-25 17:56:32.181007] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:13.854 [2024-10-25 17:56:32.181063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.434 17:56:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.434 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.434 "name": "raid_bdev1", 00:15:14.434 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:14.434 "strip_size_kb": 64, 00:15:14.434 "state": "online", 00:15:14.434 "raid_level": "raid5f", 00:15:14.434 "superblock": false, 00:15:14.434 "num_base_bdevs": 3, 00:15:14.434 "num_base_bdevs_discovered": 3, 00:15:14.434 "num_base_bdevs_operational": 3, 00:15:14.434 "base_bdevs_list": [ 00:15:14.434 { 00:15:14.434 "name": "spare", 00:15:14.434 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:14.434 "is_configured": true, 00:15:14.434 "data_offset": 0, 00:15:14.434 "data_size": 65536 00:15:14.434 }, 00:15:14.434 { 00:15:14.434 "name": "BaseBdev2", 00:15:14.434 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:14.434 "is_configured": true, 00:15:14.434 "data_offset": 0, 00:15:14.434 "data_size": 65536 00:15:14.434 }, 00:15:14.434 { 00:15:14.434 "name": "BaseBdev3", 00:15:14.434 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:14.434 "is_configured": true, 00:15:14.435 "data_offset": 0, 00:15:14.435 "data_size": 65536 00:15:14.435 } 
00:15:14.435 ] 00:15:14.435 }' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.435 "name": "raid_bdev1", 00:15:14.435 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:14.435 "strip_size_kb": 64, 00:15:14.435 "state": "online", 00:15:14.435 "raid_level": "raid5f", 00:15:14.435 "superblock": false, 
00:15:14.435 "num_base_bdevs": 3, 00:15:14.435 "num_base_bdevs_discovered": 3, 00:15:14.435 "num_base_bdevs_operational": 3, 00:15:14.435 "base_bdevs_list": [ 00:15:14.435 { 00:15:14.435 "name": "spare", 00:15:14.435 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:14.435 "is_configured": true, 00:15:14.435 "data_offset": 0, 00:15:14.435 "data_size": 65536 00:15:14.435 }, 00:15:14.435 { 00:15:14.435 "name": "BaseBdev2", 00:15:14.435 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:14.435 "is_configured": true, 00:15:14.435 "data_offset": 0, 00:15:14.435 "data_size": 65536 00:15:14.435 }, 00:15:14.435 { 00:15:14.435 "name": "BaseBdev3", 00:15:14.435 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 00:15:14.435 "is_configured": true, 00:15:14.435 "data_offset": 0, 00:15:14.435 "data_size": 65536 00:15:14.435 } 00:15:14.435 ] 00:15:14.435 }' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.435 
17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.435 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.695 "name": "raid_bdev1", 00:15:14.695 "uuid": "601f265c-d88c-45f0-9c05-22317895346a", 00:15:14.695 "strip_size_kb": 64, 00:15:14.695 "state": "online", 00:15:14.695 "raid_level": "raid5f", 00:15:14.695 "superblock": false, 00:15:14.695 "num_base_bdevs": 3, 00:15:14.695 "num_base_bdevs_discovered": 3, 00:15:14.695 "num_base_bdevs_operational": 3, 00:15:14.695 "base_bdevs_list": [ 00:15:14.695 { 00:15:14.695 "name": "spare", 00:15:14.695 "uuid": "45a01ca6-2d35-58ec-9486-2d94e68e699f", 00:15:14.695 "is_configured": true, 00:15:14.695 "data_offset": 0, 00:15:14.695 "data_size": 65536 00:15:14.695 }, 00:15:14.695 { 00:15:14.695 "name": "BaseBdev2", 00:15:14.695 "uuid": "2a9b495c-0f3c-5653-a1f9-4a72f5cdb08b", 00:15:14.695 "is_configured": true, 00:15:14.695 "data_offset": 0, 00:15:14.695 "data_size": 65536 00:15:14.695 }, 00:15:14.695 { 00:15:14.695 "name": "BaseBdev3", 00:15:14.695 "uuid": "b47d86e9-c613-5eee-ab40-78b940645cb0", 
00:15:14.695 "is_configured": true, 00:15:14.695 "data_offset": 0, 00:15:14.695 "data_size": 65536 00:15:14.695 } 00:15:14.695 ] 00:15:14.695 }' 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.695 17:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.955 [2024-10-25 17:56:33.291018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.955 [2024-10-25 17:56:33.291066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.955 [2024-10-25 17:56:33.291179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.955 [2024-10-25 17:56:33.291270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.955 [2024-10-25 17:56:33.291303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.955 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:15.214 /dev/nbd0 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:15.214 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.214 1+0 records in 00:15:15.214 1+0 records out 00:15:15.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469892 s, 8.7 MB/s 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.215 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:15.474 /dev/nbd1 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:15.474 17:56:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.474 1+0 records in 00:15:15.474 1+0 records out 00:15:15.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416643 s, 9.8 MB/s 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.474 17:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.733 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.992 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81487 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81487 ']' 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81487 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81487 00:15:16.250 killing process with pid 81487 00:15:16.250 Received shutdown signal, test time was about 60.000000 seconds 00:15:16.250 00:15:16.250 Latency(us) 00:15:16.250 [2024-10-25T17:56:34.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.250 [2024-10-25T17:56:34.686Z] =================================================================================================================== 00:15:16.250 [2024-10-25T17:56:34.686Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81487' 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81487 00:15:16.250 [2024-10-25 17:56:34.566507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.250 17:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81487 00:15:16.819 [2024-10-25 17:56:34.960323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.758 ************************************ 00:15:17.758 END TEST raid5f_rebuild_test 00:15:17.758 ************************************ 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:17.758 00:15:17.758 real 0m15.314s 00:15:17.758 user 0m18.850s 00:15:17.758 sys 0m1.983s 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.758 17:56:36 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:17.758 17:56:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:17.758 17:56:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.758 17:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.758 ************************************ 00:15:17.758 START TEST raid5f_rebuild_test_sb 00:15:17.758 ************************************ 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 
00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81922 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81922 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81922 ']' 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.758 17:56:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.018 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:18.018 Zero copy mechanism will not be used. 00:15:18.018 [2024-10-25 17:56:36.257169] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:15:18.018 [2024-10-25 17:56:36.257301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81922 ] 00:15:18.018 [2024-10-25 17:56:36.436242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.277 [2024-10-25 17:56:36.563455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.536 [2024-10-25 17:56:36.772452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.536 [2024-10-25 17:56:36.772501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:18.795 BaseBdev1_malloc 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.795 [2024-10-25 17:56:37.156174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.795 [2024-10-25 17:56:37.156259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.795 [2024-10-25 17:56:37.156285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:18.795 [2024-10-25 17:56:37.156297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.795 [2024-10-25 17:56:37.158673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.795 [2024-10-25 17:56:37.158713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.795 BaseBdev1 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.795 BaseBdev2_malloc 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.795 17:56:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.795 [2024-10-25 17:56:37.211723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:18.795 [2024-10-25 17:56:37.211794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.795 [2024-10-25 17:56:37.211813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:18.795 [2024-10-25 17:56:37.211836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.795 [2024-10-25 17:56:37.214010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.795 [2024-10-25 17:56:37.214048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.795 BaseBdev2 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.795 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.055 BaseBdev3_malloc 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.055 [2024-10-25 17:56:37.279314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:19.055 [2024-10-25 17:56:37.279377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.055 [2024-10-25 17:56:37.279400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.055 [2024-10-25 17:56:37.279411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.055 [2024-10-25 17:56:37.281478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.055 [2024-10-25 17:56:37.281520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.055 BaseBdev3 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.055 spare_malloc 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.055 spare_delay 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.055 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.055 [2024-10-25 17:56:37.345958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.055 [2024-10-25 17:56:37.346023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.055 [2024-10-25 17:56:37.346041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:19.055 [2024-10-25 17:56:37.346052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.056 [2024-10-25 17:56:37.348185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.056 [2024-10-25 17:56:37.348226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.056 spare 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.056 [2024-10-25 17:56:37.358013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.056 [2024-10-25 17:56:37.359881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.056 [2024-10-25 17:56:37.359944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.056 [2024-10-25 
17:56:37.360117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:19.056 [2024-10-25 17:56:37.360131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:19.056 [2024-10-25 17:56:37.360424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:19.056 [2024-10-25 17:56:37.365988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:19.056 [2024-10-25 17:56:37.366017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:19.056 [2024-10-25 17:56:37.366228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.056 
17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.056 "name": "raid_bdev1", 00:15:19.056 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:19.056 "strip_size_kb": 64, 00:15:19.056 "state": "online", 00:15:19.056 "raid_level": "raid5f", 00:15:19.056 "superblock": true, 00:15:19.056 "num_base_bdevs": 3, 00:15:19.056 "num_base_bdevs_discovered": 3, 00:15:19.056 "num_base_bdevs_operational": 3, 00:15:19.056 "base_bdevs_list": [ 00:15:19.056 { 00:15:19.056 "name": "BaseBdev1", 00:15:19.056 "uuid": "48fb23c3-e158-5208-bda7-1cae94758d5c", 00:15:19.056 "is_configured": true, 00:15:19.056 "data_offset": 2048, 00:15:19.056 "data_size": 63488 00:15:19.056 }, 00:15:19.056 { 00:15:19.056 "name": "BaseBdev2", 00:15:19.056 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:19.056 "is_configured": true, 00:15:19.056 "data_offset": 2048, 00:15:19.056 "data_size": 63488 00:15:19.056 }, 00:15:19.056 { 00:15:19.056 "name": "BaseBdev3", 00:15:19.056 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:19.056 "is_configured": true, 00:15:19.056 "data_offset": 2048, 00:15:19.056 "data_size": 63488 00:15:19.056 } 00:15:19.056 ] 00:15:19.056 }' 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.056 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.624 17:56:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.624 [2024-10-25 17:56:37.820358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:15:19.624 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.625 17:56:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:19.884 [2024-10-25 17:56:38.083847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.884 /dev/nbd0 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 
)) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.884 1+0 records in 00:15:19.884 1+0 records out 00:15:19.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042633 s, 9.6 MB/s 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:19.884 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:20.452 496+0 records in 00:15:20.452 496+0 records out 00:15:20.452 65011712 bytes (65 MB, 62 MiB) copied, 0.419568 s, 155 MB/s 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.452 17:56:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.452 [2024-10-25 17:56:38.805044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.452 [2024-10-25 17:56:38.821543] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.452 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.452 "name": "raid_bdev1", 
00:15:20.452 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:20.452 "strip_size_kb": 64, 00:15:20.452 "state": "online", 00:15:20.452 "raid_level": "raid5f", 00:15:20.452 "superblock": true, 00:15:20.452 "num_base_bdevs": 3, 00:15:20.452 "num_base_bdevs_discovered": 2, 00:15:20.452 "num_base_bdevs_operational": 2, 00:15:20.452 "base_bdevs_list": [ 00:15:20.452 { 00:15:20.452 "name": null, 00:15:20.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.452 "is_configured": false, 00:15:20.452 "data_offset": 0, 00:15:20.452 "data_size": 63488 00:15:20.452 }, 00:15:20.452 { 00:15:20.452 "name": "BaseBdev2", 00:15:20.452 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:20.452 "is_configured": true, 00:15:20.452 "data_offset": 2048, 00:15:20.452 "data_size": 63488 00:15:20.452 }, 00:15:20.452 { 00:15:20.452 "name": "BaseBdev3", 00:15:20.452 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:20.452 "is_configured": true, 00:15:20.453 "data_offset": 2048, 00:15:20.453 "data_size": 63488 00:15:20.453 } 00:15:20.453 ] 00:15:20.453 }' 00:15:20.453 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.453 17:56:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.028 17:56:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.028 17:56:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.028 17:56:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.028 [2024-10-25 17:56:39.276772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.028 [2024-10-25 17:56:39.295495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:21.028 17:56:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.028 17:56:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.028 [2024-10-25 17:56:39.303470] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.968 "name": "raid_bdev1", 00:15:21.968 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:21.968 "strip_size_kb": 64, 00:15:21.968 "state": "online", 00:15:21.968 "raid_level": "raid5f", 00:15:21.968 "superblock": true, 00:15:21.968 "num_base_bdevs": 3, 00:15:21.968 "num_base_bdevs_discovered": 3, 00:15:21.968 "num_base_bdevs_operational": 3, 00:15:21.968 "process": { 00:15:21.968 "type": "rebuild", 00:15:21.968 "target": "spare", 00:15:21.968 "progress": { 00:15:21.968 "blocks": 20480, 00:15:21.968 "percent": 16 00:15:21.968 } 
00:15:21.968 }, 00:15:21.968 "base_bdevs_list": [ 00:15:21.968 { 00:15:21.968 "name": "spare", 00:15:21.968 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:21.968 "is_configured": true, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 }, 00:15:21.968 { 00:15:21.968 "name": "BaseBdev2", 00:15:21.968 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:21.968 "is_configured": true, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 }, 00:15:21.968 { 00:15:21.968 "name": "BaseBdev3", 00:15:21.968 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:21.968 "is_configured": true, 00:15:21.968 "data_offset": 2048, 00:15:21.968 "data_size": 63488 00:15:21.968 } 00:15:21.968 ] 00:15:21.968 }' 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.968 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.228 [2024-10-25 17:56:40.438655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.228 [2024-10-25 17:56:40.514788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.228 [2024-10-25 17:56:40.514887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.228 [2024-10-25 17:56:40.514908] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.228 [2024-10-25 17:56:40.514916] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.228 "name": "raid_bdev1", 00:15:22.228 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:22.228 "strip_size_kb": 64, 00:15:22.228 "state": "online", 00:15:22.228 "raid_level": "raid5f", 00:15:22.228 "superblock": true, 00:15:22.228 "num_base_bdevs": 3, 00:15:22.228 "num_base_bdevs_discovered": 2, 00:15:22.228 "num_base_bdevs_operational": 2, 00:15:22.228 "base_bdevs_list": [ 00:15:22.228 { 00:15:22.228 "name": null, 00:15:22.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.228 "is_configured": false, 00:15:22.228 "data_offset": 0, 00:15:22.228 "data_size": 63488 00:15:22.228 }, 00:15:22.228 { 00:15:22.228 "name": "BaseBdev2", 00:15:22.228 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:22.228 "is_configured": true, 00:15:22.228 "data_offset": 2048, 00:15:22.228 "data_size": 63488 00:15:22.228 }, 00:15:22.228 { 00:15:22.228 "name": "BaseBdev3", 00:15:22.228 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:22.228 "is_configured": true, 00:15:22.228 "data_offset": 2048, 00:15:22.228 "data_size": 63488 00:15:22.228 } 00:15:22.228 ] 00:15:22.228 }' 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.228 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.798 17:56:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.798 17:56:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.798 "name": "raid_bdev1", 00:15:22.798 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:22.798 "strip_size_kb": 64, 00:15:22.798 "state": "online", 00:15:22.798 "raid_level": "raid5f", 00:15:22.798 "superblock": true, 00:15:22.798 "num_base_bdevs": 3, 00:15:22.798 "num_base_bdevs_discovered": 2, 00:15:22.798 "num_base_bdevs_operational": 2, 00:15:22.798 "base_bdevs_list": [ 00:15:22.798 { 00:15:22.798 "name": null, 00:15:22.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.798 "is_configured": false, 00:15:22.798 "data_offset": 0, 00:15:22.798 "data_size": 63488 00:15:22.798 }, 00:15:22.798 { 00:15:22.798 "name": "BaseBdev2", 00:15:22.798 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:22.798 "is_configured": true, 00:15:22.798 "data_offset": 2048, 00:15:22.798 "data_size": 63488 00:15:22.798 }, 00:15:22.798 { 00:15:22.798 "name": "BaseBdev3", 00:15:22.798 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:22.798 "is_configured": true, 00:15:22.798 "data_offset": 2048, 00:15:22.798 "data_size": 63488 00:15:22.798 } 00:15:22.798 ] 00:15:22.798 }' 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.798 17:56:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.798 [2024-10-25 17:56:41.134686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.798 [2024-10-25 17:56:41.153457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.798 17:56:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:22.798 [2024-10-25 17:56:41.162757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:23.738 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.998 "name": "raid_bdev1", 00:15:23.998 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:23.998 "strip_size_kb": 64, 00:15:23.998 "state": "online", 00:15:23.998 "raid_level": "raid5f", 00:15:23.998 "superblock": true, 00:15:23.998 "num_base_bdevs": 3, 00:15:23.998 "num_base_bdevs_discovered": 3, 00:15:23.998 "num_base_bdevs_operational": 3, 00:15:23.998 "process": { 00:15:23.998 "type": "rebuild", 00:15:23.998 "target": "spare", 00:15:23.998 "progress": { 00:15:23.998 "blocks": 18432, 00:15:23.998 "percent": 14 00:15:23.998 } 00:15:23.998 }, 00:15:23.998 "base_bdevs_list": [ 00:15:23.998 { 00:15:23.998 "name": "spare", 00:15:23.998 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:23.998 "is_configured": true, 00:15:23.998 "data_offset": 2048, 00:15:23.998 "data_size": 63488 00:15:23.998 }, 00:15:23.998 { 00:15:23.998 "name": "BaseBdev2", 00:15:23.998 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:23.998 "is_configured": true, 00:15:23.998 "data_offset": 2048, 00:15:23.998 "data_size": 63488 00:15:23.998 }, 00:15:23.998 { 00:15:23.998 "name": "BaseBdev3", 00:15:23.998 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:23.998 "is_configured": true, 00:15:23.998 "data_offset": 2048, 00:15:23.998 "data_size": 63488 00:15:23.998 } 00:15:23.998 ] 00:15:23.998 }' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:23.998 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=567 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.998 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.998 "name": "raid_bdev1", 00:15:23.998 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:23.998 "strip_size_kb": 64, 00:15:23.998 "state": "online", 00:15:23.998 "raid_level": "raid5f", 00:15:23.998 "superblock": true, 00:15:23.998 "num_base_bdevs": 3, 00:15:23.998 "num_base_bdevs_discovered": 3, 00:15:23.998 "num_base_bdevs_operational": 3, 00:15:23.998 "process": { 00:15:23.998 "type": "rebuild", 00:15:23.998 "target": "spare", 00:15:23.998 "progress": { 00:15:23.998 "blocks": 24576, 00:15:23.998 "percent": 19 00:15:23.998 } 00:15:23.998 }, 00:15:23.998 "base_bdevs_list": [ 00:15:23.998 { 00:15:23.999 "name": "spare", 00:15:23.999 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:23.999 "is_configured": true, 00:15:23.999 "data_offset": 2048, 00:15:23.999 "data_size": 63488 00:15:23.999 }, 00:15:23.999 { 00:15:23.999 "name": "BaseBdev2", 00:15:23.999 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:23.999 "is_configured": true, 00:15:23.999 "data_offset": 2048, 00:15:23.999 "data_size": 63488 00:15:23.999 }, 00:15:23.999 { 00:15:23.999 "name": "BaseBdev3", 00:15:23.999 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:23.999 "is_configured": true, 00:15:23.999 "data_offset": 2048, 00:15:23.999 "data_size": 63488 00:15:23.999 } 00:15:23.999 ] 00:15:23.999 }' 00:15:23.999 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.259 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.259 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.259 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.259 17:56:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.197 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.197 "name": "raid_bdev1", 00:15:25.197 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:25.197 "strip_size_kb": 64, 00:15:25.197 "state": "online", 00:15:25.197 "raid_level": "raid5f", 00:15:25.197 "superblock": true, 00:15:25.197 "num_base_bdevs": 3, 00:15:25.197 "num_base_bdevs_discovered": 3, 00:15:25.197 "num_base_bdevs_operational": 3, 00:15:25.197 "process": { 00:15:25.198 "type": "rebuild", 00:15:25.198 "target": "spare", 00:15:25.198 "progress": { 00:15:25.198 "blocks": 47104, 00:15:25.198 "percent": 37 00:15:25.198 } 00:15:25.198 }, 00:15:25.198 "base_bdevs_list": [ 00:15:25.198 { 
00:15:25.198 "name": "spare", 00:15:25.198 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:25.198 "is_configured": true, 00:15:25.198 "data_offset": 2048, 00:15:25.198 "data_size": 63488 00:15:25.198 }, 00:15:25.198 { 00:15:25.198 "name": "BaseBdev2", 00:15:25.198 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:25.198 "is_configured": true, 00:15:25.198 "data_offset": 2048, 00:15:25.198 "data_size": 63488 00:15:25.198 }, 00:15:25.198 { 00:15:25.198 "name": "BaseBdev3", 00:15:25.198 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:25.198 "is_configured": true, 00:15:25.198 "data_offset": 2048, 00:15:25.198 "data_size": 63488 00:15:25.198 } 00:15:25.198 ] 00:15:25.198 }' 00:15:25.198 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.198 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.198 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.457 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.457 17:56:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.396 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.397 "name": "raid_bdev1", 00:15:26.397 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:26.397 "strip_size_kb": 64, 00:15:26.397 "state": "online", 00:15:26.397 "raid_level": "raid5f", 00:15:26.397 "superblock": true, 00:15:26.397 "num_base_bdevs": 3, 00:15:26.397 "num_base_bdevs_discovered": 3, 00:15:26.397 "num_base_bdevs_operational": 3, 00:15:26.397 "process": { 00:15:26.397 "type": "rebuild", 00:15:26.397 "target": "spare", 00:15:26.397 "progress": { 00:15:26.397 "blocks": 69632, 00:15:26.397 "percent": 54 00:15:26.397 } 00:15:26.397 }, 00:15:26.397 "base_bdevs_list": [ 00:15:26.397 { 00:15:26.397 "name": "spare", 00:15:26.397 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 2048, 00:15:26.397 "data_size": 63488 00:15:26.397 }, 00:15:26.397 { 00:15:26.397 "name": "BaseBdev2", 00:15:26.397 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 2048, 00:15:26.397 "data_size": 63488 00:15:26.397 }, 00:15:26.397 { 00:15:26.397 "name": "BaseBdev3", 00:15:26.397 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 2048, 00:15:26.397 "data_size": 63488 00:15:26.397 } 00:15:26.397 ] 00:15:26.397 }' 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.397 17:56:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.784 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.784 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.785 "name": "raid_bdev1", 00:15:27.785 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:27.785 "strip_size_kb": 64, 00:15:27.785 "state": "online", 00:15:27.785 
"raid_level": "raid5f", 00:15:27.785 "superblock": true, 00:15:27.785 "num_base_bdevs": 3, 00:15:27.785 "num_base_bdevs_discovered": 3, 00:15:27.785 "num_base_bdevs_operational": 3, 00:15:27.785 "process": { 00:15:27.785 "type": "rebuild", 00:15:27.785 "target": "spare", 00:15:27.785 "progress": { 00:15:27.785 "blocks": 94208, 00:15:27.785 "percent": 74 00:15:27.785 } 00:15:27.785 }, 00:15:27.785 "base_bdevs_list": [ 00:15:27.785 { 00:15:27.785 "name": "spare", 00:15:27.785 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:27.785 "is_configured": true, 00:15:27.785 "data_offset": 2048, 00:15:27.785 "data_size": 63488 00:15:27.785 }, 00:15:27.785 { 00:15:27.785 "name": "BaseBdev2", 00:15:27.785 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:27.785 "is_configured": true, 00:15:27.785 "data_offset": 2048, 00:15:27.785 "data_size": 63488 00:15:27.785 }, 00:15:27.785 { 00:15:27.785 "name": "BaseBdev3", 00:15:27.785 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:27.785 "is_configured": true, 00:15:27.785 "data_offset": 2048, 00:15:27.785 "data_size": 63488 00:15:27.785 } 00:15:27.785 ] 00:15:27.785 }' 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.785 17:56:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.745 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.745 "name": "raid_bdev1", 00:15:28.745 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:28.745 "strip_size_kb": 64, 00:15:28.745 "state": "online", 00:15:28.745 "raid_level": "raid5f", 00:15:28.745 "superblock": true, 00:15:28.745 "num_base_bdevs": 3, 00:15:28.745 "num_base_bdevs_discovered": 3, 00:15:28.745 "num_base_bdevs_operational": 3, 00:15:28.745 "process": { 00:15:28.745 "type": "rebuild", 00:15:28.745 "target": "spare", 00:15:28.745 "progress": { 00:15:28.745 "blocks": 116736, 00:15:28.745 "percent": 91 00:15:28.745 } 00:15:28.745 }, 00:15:28.745 "base_bdevs_list": [ 00:15:28.745 { 00:15:28.745 "name": "spare", 00:15:28.745 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:28.745 "is_configured": true, 00:15:28.745 "data_offset": 2048, 00:15:28.745 "data_size": 63488 00:15:28.745 }, 00:15:28.745 { 00:15:28.745 "name": "BaseBdev2", 00:15:28.746 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:28.746 
"is_configured": true, 00:15:28.746 "data_offset": 2048, 00:15:28.746 "data_size": 63488 00:15:28.746 }, 00:15:28.746 { 00:15:28.746 "name": "BaseBdev3", 00:15:28.746 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:28.746 "is_configured": true, 00:15:28.746 "data_offset": 2048, 00:15:28.746 "data_size": 63488 00:15:28.746 } 00:15:28.746 ] 00:15:28.746 }' 00:15:28.746 17:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.746 17:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.746 17:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.746 17:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.746 17:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.005 [2024-10-25 17:56:47.415597] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:29.005 [2024-10-25 17:56:47.415691] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:29.005 [2024-10-25 17:56:47.415815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.939 
17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.939 "name": "raid_bdev1", 00:15:29.939 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:29.939 "strip_size_kb": 64, 00:15:29.939 "state": "online", 00:15:29.939 "raid_level": "raid5f", 00:15:29.939 "superblock": true, 00:15:29.939 "num_base_bdevs": 3, 00:15:29.939 "num_base_bdevs_discovered": 3, 00:15:29.939 "num_base_bdevs_operational": 3, 00:15:29.939 "base_bdevs_list": [ 00:15:29.939 { 00:15:29.939 "name": "spare", 00:15:29.939 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:29.939 "is_configured": true, 00:15:29.939 "data_offset": 2048, 00:15:29.939 "data_size": 63488 00:15:29.939 }, 00:15:29.939 { 00:15:29.939 "name": "BaseBdev2", 00:15:29.939 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:29.939 "is_configured": true, 00:15:29.939 "data_offset": 2048, 00:15:29.939 "data_size": 63488 00:15:29.939 }, 00:15:29.939 { 00:15:29.939 "name": "BaseBdev3", 00:15:29.939 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:29.939 "is_configured": true, 00:15:29.939 "data_offset": 2048, 00:15:29.939 "data_size": 63488 00:15:29.939 } 00:15:29.939 ] 00:15:29.939 }' 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:29.939 
17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.939 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.940 "name": "raid_bdev1", 00:15:29.940 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:29.940 "strip_size_kb": 64, 00:15:29.940 "state": "online", 00:15:29.940 "raid_level": "raid5f", 00:15:29.940 "superblock": true, 00:15:29.940 "num_base_bdevs": 3, 00:15:29.940 "num_base_bdevs_discovered": 3, 00:15:29.940 "num_base_bdevs_operational": 3, 00:15:29.940 "base_bdevs_list": [ 00:15:29.940 { 00:15:29.940 "name": "spare", 00:15:29.940 "uuid": 
"97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:29.940 "is_configured": true, 00:15:29.940 "data_offset": 2048, 00:15:29.940 "data_size": 63488 00:15:29.940 }, 00:15:29.940 { 00:15:29.940 "name": "BaseBdev2", 00:15:29.940 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:29.940 "is_configured": true, 00:15:29.940 "data_offset": 2048, 00:15:29.940 "data_size": 63488 00:15:29.940 }, 00:15:29.940 { 00:15:29.940 "name": "BaseBdev3", 00:15:29.940 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:29.940 "is_configured": true, 00:15:29.940 "data_offset": 2048, 00:15:29.940 "data_size": 63488 00:15:29.940 } 00:15:29.940 ] 00:15:29.940 }' 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.940 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.197 "name": "raid_bdev1", 00:15:30.197 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:30.197 "strip_size_kb": 64, 00:15:30.197 "state": "online", 00:15:30.197 "raid_level": "raid5f", 00:15:30.197 "superblock": true, 00:15:30.197 "num_base_bdevs": 3, 00:15:30.197 "num_base_bdevs_discovered": 3, 00:15:30.197 "num_base_bdevs_operational": 3, 00:15:30.197 "base_bdevs_list": [ 00:15:30.197 { 00:15:30.197 "name": "spare", 00:15:30.197 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:30.197 "is_configured": true, 00:15:30.197 "data_offset": 2048, 00:15:30.197 "data_size": 63488 00:15:30.197 }, 00:15:30.197 { 00:15:30.197 "name": "BaseBdev2", 00:15:30.197 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:30.197 "is_configured": true, 00:15:30.197 "data_offset": 2048, 00:15:30.197 "data_size": 63488 00:15:30.197 }, 00:15:30.197 { 00:15:30.197 "name": "BaseBdev3", 00:15:30.197 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:30.197 "is_configured": true, 00:15:30.197 "data_offset": 2048, 00:15:30.197 "data_size": 63488 00:15:30.197 } 00:15:30.197 ] 00:15:30.197 }' 
00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.197 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.455 [2024-10-25 17:56:48.820433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.455 [2024-10-25 17:56:48.820473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.455 [2024-10-25 17:56:48.820570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.455 [2024-10-25 17:56:48.820667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.455 [2024-10-25 17:56:48.820689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.455 17:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:30.714 /dev/nbd0 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.714 1+0 records in 00:15:30.714 1+0 records out 00:15:30.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251145 s, 16.3 MB/s 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.714 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:30.973 /dev/nbd1 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:30.973 17:56:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.973 1+0 records in 00:15:30.973 1+0 records out 00:15:30.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523962 s, 7.8 MB/s 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.973 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.973 17:56:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.232 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.492 17:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.751 [2024-10-25 17:56:50.044371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.751 [2024-10-25 17:56:50.044459] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:31.751 [2024-10-25 17:56:50.044485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:31.751 [2024-10-25 17:56:50.044498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.751 [2024-10-25 17:56:50.047077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.751 [2024-10-25 17:56:50.047123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.751 [2024-10-25 17:56:50.047220] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.751 [2024-10-25 17:56:50.047292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.751 [2024-10-25 17:56:50.047442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.751 [2024-10-25 17:56:50.047560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.751 spare 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.751 [2024-10-25 17:56:50.147472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:31.751 [2024-10-25 17:56:50.147513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:31.751 [2024-10-25 17:56:50.147877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:31.751 [2024-10-25 17:56:50.153744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:31.751 [2024-10-25 
17:56:50.153770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:31.751 [2024-10-25 17:56:50.154004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.751 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.010 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.011 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.011 "name": "raid_bdev1", 00:15:32.011 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:32.011 "strip_size_kb": 64, 00:15:32.011 "state": "online", 00:15:32.011 "raid_level": "raid5f", 00:15:32.011 "superblock": true, 00:15:32.011 "num_base_bdevs": 3, 00:15:32.011 "num_base_bdevs_discovered": 3, 00:15:32.011 "num_base_bdevs_operational": 3, 00:15:32.011 "base_bdevs_list": [ 00:15:32.011 { 00:15:32.011 "name": "spare", 00:15:32.011 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:32.011 "is_configured": true, 00:15:32.011 "data_offset": 2048, 00:15:32.011 "data_size": 63488 00:15:32.011 }, 00:15:32.011 { 00:15:32.011 "name": "BaseBdev2", 00:15:32.011 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:32.011 "is_configured": true, 00:15:32.011 "data_offset": 2048, 00:15:32.011 "data_size": 63488 00:15:32.011 }, 00:15:32.011 { 00:15:32.011 "name": "BaseBdev3", 00:15:32.011 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:32.011 "is_configured": true, 00:15:32.011 "data_offset": 2048, 00:15:32.011 "data_size": 63488 00:15:32.011 } 00:15:32.011 ] 00:15:32.011 }' 00:15:32.011 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.011 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.269 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.269 "name": "raid_bdev1", 00:15:32.269 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:32.269 "strip_size_kb": 64, 00:15:32.269 "state": "online", 00:15:32.269 "raid_level": "raid5f", 00:15:32.269 "superblock": true, 00:15:32.269 "num_base_bdevs": 3, 00:15:32.269 "num_base_bdevs_discovered": 3, 00:15:32.269 "num_base_bdevs_operational": 3, 00:15:32.269 "base_bdevs_list": [ 00:15:32.269 { 00:15:32.269 "name": "spare", 00:15:32.269 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:32.269 "is_configured": true, 00:15:32.269 "data_offset": 2048, 00:15:32.269 "data_size": 63488 00:15:32.269 }, 00:15:32.269 { 00:15:32.270 "name": "BaseBdev2", 00:15:32.270 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:32.270 "is_configured": true, 00:15:32.270 "data_offset": 2048, 00:15:32.270 "data_size": 63488 00:15:32.270 }, 00:15:32.270 { 00:15:32.270 "name": "BaseBdev3", 00:15:32.270 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:32.270 "is_configured": true, 00:15:32.270 "data_offset": 2048, 00:15:32.270 "data_size": 63488 00:15:32.270 } 00:15:32.270 ] 00:15:32.270 }' 00:15:32.270 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.270 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:15:32.270 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.543 [2024-10-25 17:56:50.776277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.543 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.543 "name": "raid_bdev1", 00:15:32.543 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:32.543 "strip_size_kb": 64, 00:15:32.543 "state": "online", 00:15:32.543 "raid_level": "raid5f", 00:15:32.543 "superblock": true, 00:15:32.543 "num_base_bdevs": 3, 00:15:32.543 "num_base_bdevs_discovered": 2, 00:15:32.543 "num_base_bdevs_operational": 2, 00:15:32.543 "base_bdevs_list": [ 00:15:32.543 { 00:15:32.543 "name": null, 00:15:32.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.543 "is_configured": false, 00:15:32.543 "data_offset": 0, 00:15:32.543 "data_size": 63488 00:15:32.544 }, 00:15:32.544 { 00:15:32.544 "name": "BaseBdev2", 00:15:32.544 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:32.544 "is_configured": true, 00:15:32.544 
"data_offset": 2048, 00:15:32.544 "data_size": 63488 00:15:32.544 }, 00:15:32.544 { 00:15:32.544 "name": "BaseBdev3", 00:15:32.544 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:32.544 "is_configured": true, 00:15:32.544 "data_offset": 2048, 00:15:32.544 "data_size": 63488 00:15:32.544 } 00:15:32.544 ] 00:15:32.544 }' 00:15:32.544 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.544 17:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.804 17:56:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:32.804 17:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.804 17:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.804 [2024-10-25 17:56:51.219588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.804 [2024-10-25 17:56:51.219801] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:32.804 [2024-10-25 17:56:51.219820] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:32.804 [2024-10-25 17:56:51.219878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.804 [2024-10-25 17:56:51.236332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:32.804 17:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.804 17:56:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:33.064 [2024-10-25 17:56:51.244187] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.002 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.003 "name": "raid_bdev1", 00:15:34.003 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:34.003 "strip_size_kb": 64, 00:15:34.003 "state": "online", 00:15:34.003 
"raid_level": "raid5f", 00:15:34.003 "superblock": true, 00:15:34.003 "num_base_bdevs": 3, 00:15:34.003 "num_base_bdevs_discovered": 3, 00:15:34.003 "num_base_bdevs_operational": 3, 00:15:34.003 "process": { 00:15:34.003 "type": "rebuild", 00:15:34.003 "target": "spare", 00:15:34.003 "progress": { 00:15:34.003 "blocks": 20480, 00:15:34.003 "percent": 16 00:15:34.003 } 00:15:34.003 }, 00:15:34.003 "base_bdevs_list": [ 00:15:34.003 { 00:15:34.003 "name": "spare", 00:15:34.003 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:34.003 "is_configured": true, 00:15:34.003 "data_offset": 2048, 00:15:34.003 "data_size": 63488 00:15:34.003 }, 00:15:34.003 { 00:15:34.003 "name": "BaseBdev2", 00:15:34.003 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:34.003 "is_configured": true, 00:15:34.003 "data_offset": 2048, 00:15:34.003 "data_size": 63488 00:15:34.003 }, 00:15:34.003 { 00:15:34.003 "name": "BaseBdev3", 00:15:34.003 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:34.003 "is_configured": true, 00:15:34.003 "data_offset": 2048, 00:15:34.003 "data_size": 63488 00:15:34.003 } 00:15:34.003 ] 00:15:34.003 }' 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.003 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.003 [2024-10-25 17:56:52.382940] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.261 [2024-10-25 17:56:52.452535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.261 [2024-10-25 17:56:52.452627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.261 [2024-10-25 17:56:52.452646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.261 [2024-10-25 17:56:52.452657] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.261 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.262 17:56:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.262 "name": "raid_bdev1", 00:15:34.262 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:34.262 "strip_size_kb": 64, 00:15:34.262 "state": "online", 00:15:34.262 "raid_level": "raid5f", 00:15:34.262 "superblock": true, 00:15:34.262 "num_base_bdevs": 3, 00:15:34.262 "num_base_bdevs_discovered": 2, 00:15:34.262 "num_base_bdevs_operational": 2, 00:15:34.262 "base_bdevs_list": [ 00:15:34.262 { 00:15:34.262 "name": null, 00:15:34.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.262 "is_configured": false, 00:15:34.262 "data_offset": 0, 00:15:34.262 "data_size": 63488 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "name": "BaseBdev2", 00:15:34.262 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "name": "BaseBdev3", 00:15:34.262 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:34.262 "is_configured": true, 00:15:34.262 "data_offset": 2048, 00:15:34.262 "data_size": 63488 00:15:34.262 } 00:15:34.262 ] 00:15:34.262 }' 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.262 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.522 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:34.522 17:56:52 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.522 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.522 [2024-10-25 17:56:52.928455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:34.522 [2024-10-25 17:56:52.928545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.522 [2024-10-25 17:56:52.928568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:34.522 [2024-10-25 17:56:52.928583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.522 [2024-10-25 17:56:52.929135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.522 [2024-10-25 17:56:52.929169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:34.522 [2024-10-25 17:56:52.929276] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:34.522 [2024-10-25 17:56:52.929294] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.522 [2024-10-25 17:56:52.929304] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:34.522 [2024-10-25 17:56:52.929331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.522 [2024-10-25 17:56:52.945077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:34.522 spare 00:15:34.522 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.522 17:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:34.522 [2024-10-25 17:56:52.952989] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.902 17:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.902 "name": "raid_bdev1", 00:15:35.902 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:35.902 "strip_size_kb": 64, 00:15:35.902 "state": 
"online", 00:15:35.902 "raid_level": "raid5f", 00:15:35.902 "superblock": true, 00:15:35.902 "num_base_bdevs": 3, 00:15:35.902 "num_base_bdevs_discovered": 3, 00:15:35.902 "num_base_bdevs_operational": 3, 00:15:35.902 "process": { 00:15:35.903 "type": "rebuild", 00:15:35.903 "target": "spare", 00:15:35.903 "progress": { 00:15:35.903 "blocks": 20480, 00:15:35.903 "percent": 16 00:15:35.903 } 00:15:35.903 }, 00:15:35.903 "base_bdevs_list": [ 00:15:35.903 { 00:15:35.903 "name": "spare", 00:15:35.903 "uuid": "97f8c702-142a-5827-8cfb-fc41576f6922", 00:15:35.903 "is_configured": true, 00:15:35.903 "data_offset": 2048, 00:15:35.903 "data_size": 63488 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "name": "BaseBdev2", 00:15:35.903 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:35.903 "is_configured": true, 00:15:35.903 "data_offset": 2048, 00:15:35.903 "data_size": 63488 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "name": "BaseBdev3", 00:15:35.903 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:35.903 "is_configured": true, 00:15:35.903 "data_offset": 2048, 00:15:35.903 "data_size": 63488 00:15:35.903 } 00:15:35.903 ] 00:15:35.903 }' 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.903 [2024-10-25 17:56:54.096645] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.903 [2024-10-25 17:56:54.163561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.903 [2024-10-25 17:56:54.163647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.903 [2024-10-25 17:56:54.163683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.903 [2024-10-25 17:56:54.163692] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.903 "name": "raid_bdev1", 00:15:35.903 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:35.903 "strip_size_kb": 64, 00:15:35.903 "state": "online", 00:15:35.903 "raid_level": "raid5f", 00:15:35.903 "superblock": true, 00:15:35.903 "num_base_bdevs": 3, 00:15:35.903 "num_base_bdevs_discovered": 2, 00:15:35.903 "num_base_bdevs_operational": 2, 00:15:35.903 "base_bdevs_list": [ 00:15:35.903 { 00:15:35.903 "name": null, 00:15:35.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.903 "is_configured": false, 00:15:35.903 "data_offset": 0, 00:15:35.903 "data_size": 63488 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "name": "BaseBdev2", 00:15:35.903 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:35.903 "is_configured": true, 00:15:35.903 "data_offset": 2048, 00:15:35.903 "data_size": 63488 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "name": "BaseBdev3", 00:15:35.903 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:35.903 "is_configured": true, 00:15:35.903 "data_offset": 2048, 00:15:35.903 "data_size": 63488 00:15:35.903 } 00:15:35.903 ] 00:15:35.903 }' 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.903 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.472 "name": "raid_bdev1", 00:15:36.472 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:36.472 "strip_size_kb": 64, 00:15:36.472 "state": "online", 00:15:36.472 "raid_level": "raid5f", 00:15:36.472 "superblock": true, 00:15:36.472 "num_base_bdevs": 3, 00:15:36.472 "num_base_bdevs_discovered": 2, 00:15:36.472 "num_base_bdevs_operational": 2, 00:15:36.472 "base_bdevs_list": [ 00:15:36.472 { 00:15:36.472 "name": null, 00:15:36.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.472 "is_configured": false, 00:15:36.472 "data_offset": 0, 00:15:36.472 "data_size": 63488 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "name": "BaseBdev2", 00:15:36.472 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:36.472 "is_configured": true, 00:15:36.472 "data_offset": 2048, 00:15:36.472 "data_size": 63488 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "name": "BaseBdev3", 00:15:36.472 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:36.472 "is_configured": true, 
00:15:36.472 "data_offset": 2048, 00:15:36.472 "data_size": 63488 00:15:36.472 } 00:15:36.472 ] 00:15:36.472 }' 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.472 [2024-10-25 17:56:54.836540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:36.472 [2024-10-25 17:56:54.836611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.472 [2024-10-25 17:56:54.836637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:36.472 [2024-10-25 17:56:54.836647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.472 [2024-10-25 17:56:54.837208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.472 [2024-10-25 
17:56:54.837239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.472 [2024-10-25 17:56:54.837342] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:36.472 [2024-10-25 17:56:54.837365] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.472 [2024-10-25 17:56:54.837390] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:36.472 [2024-10-25 17:56:54.837402] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:36.472 BaseBdev1 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.472 17:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.850 17:56:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.850 "name": "raid_bdev1", 00:15:37.850 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:37.850 "strip_size_kb": 64, 00:15:37.850 "state": "online", 00:15:37.850 "raid_level": "raid5f", 00:15:37.850 "superblock": true, 00:15:37.850 "num_base_bdevs": 3, 00:15:37.850 "num_base_bdevs_discovered": 2, 00:15:37.850 "num_base_bdevs_operational": 2, 00:15:37.850 "base_bdevs_list": [ 00:15:37.850 { 00:15:37.850 "name": null, 00:15:37.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.850 "is_configured": false, 00:15:37.850 "data_offset": 0, 00:15:37.850 "data_size": 63488 00:15:37.850 }, 00:15:37.850 { 00:15:37.850 "name": "BaseBdev2", 00:15:37.850 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:37.850 "is_configured": true, 00:15:37.850 "data_offset": 2048, 00:15:37.850 "data_size": 63488 00:15:37.850 }, 00:15:37.850 { 00:15:37.850 "name": "BaseBdev3", 00:15:37.850 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:37.850 "is_configured": true, 00:15:37.850 "data_offset": 2048, 00:15:37.850 "data_size": 63488 00:15:37.850 } 00:15:37.850 ] 00:15:37.850 }' 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.850 17:56:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.143 "name": "raid_bdev1", 00:15:38.143 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:38.143 "strip_size_kb": 64, 00:15:38.143 "state": "online", 00:15:38.143 "raid_level": "raid5f", 00:15:38.143 "superblock": true, 00:15:38.143 "num_base_bdevs": 3, 00:15:38.143 "num_base_bdevs_discovered": 2, 00:15:38.143 "num_base_bdevs_operational": 2, 00:15:38.143 "base_bdevs_list": [ 00:15:38.143 { 00:15:38.143 "name": null, 00:15:38.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.143 "is_configured": false, 00:15:38.143 "data_offset": 0, 00:15:38.143 "data_size": 63488 00:15:38.143 }, 00:15:38.143 { 00:15:38.143 "name": "BaseBdev2", 00:15:38.143 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 
00:15:38.143 "is_configured": true, 00:15:38.143 "data_offset": 2048, 00:15:38.143 "data_size": 63488 00:15:38.143 }, 00:15:38.143 { 00:15:38.143 "name": "BaseBdev3", 00:15:38.143 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:38.143 "is_configured": true, 00:15:38.143 "data_offset": 2048, 00:15:38.143 "data_size": 63488 00:15:38.143 } 00:15:38.143 ] 00:15:38.143 }' 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.143 17:56:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.143 [2024-10-25 17:56:56.458214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.143 [2024-10-25 17:56:56.458408] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:38.143 [2024-10-25 17:56:56.458425] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:38.143 request: 00:15:38.143 { 00:15:38.143 "base_bdev": "BaseBdev1", 00:15:38.143 "raid_bdev": "raid_bdev1", 00:15:38.143 "method": "bdev_raid_add_base_bdev", 00:15:38.143 "req_id": 1 00:15:38.143 } 00:15:38.143 Got JSON-RPC error response 00:15:38.143 response: 00:15:38.143 { 00:15:38.143 "code": -22, 00:15:38.143 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:38.143 } 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:38.143 17:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.111 "name": "raid_bdev1", 00:15:39.111 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:39.111 "strip_size_kb": 64, 00:15:39.111 "state": "online", 00:15:39.111 "raid_level": "raid5f", 00:15:39.111 "superblock": true, 00:15:39.111 "num_base_bdevs": 3, 00:15:39.111 "num_base_bdevs_discovered": 2, 00:15:39.111 "num_base_bdevs_operational": 2, 00:15:39.111 "base_bdevs_list": [ 00:15:39.111 { 00:15:39.111 "name": null, 00:15:39.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.111 "is_configured": false, 00:15:39.111 "data_offset": 0, 00:15:39.111 "data_size": 63488 00:15:39.111 }, 00:15:39.111 { 00:15:39.111 
"name": "BaseBdev2", 00:15:39.111 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:39.111 "is_configured": true, 00:15:39.111 "data_offset": 2048, 00:15:39.111 "data_size": 63488 00:15:39.111 }, 00:15:39.111 { 00:15:39.111 "name": "BaseBdev3", 00:15:39.111 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:39.111 "is_configured": true, 00:15:39.111 "data_offset": 2048, 00:15:39.111 "data_size": 63488 00:15:39.111 } 00:15:39.111 ] 00:15:39.111 }' 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.111 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.679 "name": "raid_bdev1", 00:15:39.679 "uuid": "c9ec45f3-f510-4841-a20f-78afb1099e33", 00:15:39.679 
"strip_size_kb": 64, 00:15:39.679 "state": "online", 00:15:39.679 "raid_level": "raid5f", 00:15:39.679 "superblock": true, 00:15:39.679 "num_base_bdevs": 3, 00:15:39.679 "num_base_bdevs_discovered": 2, 00:15:39.679 "num_base_bdevs_operational": 2, 00:15:39.679 "base_bdevs_list": [ 00:15:39.679 { 00:15:39.679 "name": null, 00:15:39.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.679 "is_configured": false, 00:15:39.679 "data_offset": 0, 00:15:39.679 "data_size": 63488 00:15:39.679 }, 00:15:39.679 { 00:15:39.679 "name": "BaseBdev2", 00:15:39.679 "uuid": "95e123bd-f35c-53a0-a0da-5ac3a3734f74", 00:15:39.679 "is_configured": true, 00:15:39.679 "data_offset": 2048, 00:15:39.679 "data_size": 63488 00:15:39.679 }, 00:15:39.679 { 00:15:39.679 "name": "BaseBdev3", 00:15:39.679 "uuid": "83a91004-dc60-5f09-9a46-6ad000b9a433", 00:15:39.679 "is_configured": true, 00:15:39.679 "data_offset": 2048, 00:15:39.679 "data_size": 63488 00:15:39.679 } 00:15:39.679 ] 00:15:39.679 }' 00:15:39.679 17:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81922 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81922 ']' 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 81922 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.679 17:56:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81922 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.679 killing process with pid 81922 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81922' 00:15:39.679 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 81922 00:15:39.679 Received shutdown signal, test time was about 60.000000 seconds 00:15:39.679 00:15:39.680 Latency(us) 00:15:39.680 [2024-10-25T17:56:58.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.680 [2024-10-25T17:56:58.116Z] =================================================================================================================== 00:15:39.680 [2024-10-25T17:56:58.116Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:39.680 [2024-10-25 17:56:58.095163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.680 [2024-10-25 17:56:58.095298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.680 17:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 81922 00:15:39.680 [2024-10-25 17:56:58.095375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.680 [2024-10-25 17:56:58.095389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:40.247 [2024-10-25 17:56:58.490895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.185 17:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:41.185 00:15:41.185 real 0m23.431s 00:15:41.185 user 0m30.007s 
00:15:41.185 sys 0m2.849s 00:15:41.185 17:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.185 17:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.185 ************************************ 00:15:41.186 END TEST raid5f_rebuild_test_sb 00:15:41.186 ************************************ 00:15:41.446 17:56:59 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:41.446 17:56:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:41.446 17:56:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:41.446 17:56:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.446 17:56:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 ************************************ 00:15:41.446 START TEST raid5f_state_function_test 00:15:41.446 ************************************ 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82669 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82669' 00:15:41.446 Process raid pid: 82669 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82669 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82669 ']' 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.446 17:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.446 [2024-10-25 17:56:59.753513] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:15:41.446 [2024-10-25 17:56:59.753640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.706 [2024-10-25 17:56:59.929875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.706 [2024-10-25 17:57:00.037877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.965 [2024-10-25 17:57:00.247131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.965 [2024-10-25 17:57:00.247180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 [2024-10-25 17:57:00.587905] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.224 [2024-10-25 17:57:00.587975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.224 [2024-10-25 17:57:00.587987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.224 [2024-10-25 17:57:00.587999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.224 [2024-10-25 17:57:00.588007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:42.224 [2024-10-25 17:57:00.588016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.224 [2024-10-25 17:57:00.588023] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.224 [2024-10-25 17:57:00.588033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.224 17:57:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.224 "name": "Existed_Raid", 00:15:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.224 "strip_size_kb": 64, 00:15:42.224 "state": "configuring", 00:15:42.224 "raid_level": "raid5f", 00:15:42.224 "superblock": false, 00:15:42.224 "num_base_bdevs": 4, 00:15:42.224 "num_base_bdevs_discovered": 0, 00:15:42.224 "num_base_bdevs_operational": 4, 00:15:42.224 "base_bdevs_list": [ 00:15:42.224 { 00:15:42.224 "name": "BaseBdev1", 00:15:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.224 "is_configured": false, 00:15:42.224 "data_offset": 0, 00:15:42.224 "data_size": 0 00:15:42.224 }, 00:15:42.224 { 00:15:42.224 "name": "BaseBdev2", 00:15:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.224 "is_configured": false, 00:15:42.224 "data_offset": 0, 00:15:42.224 "data_size": 0 00:15:42.224 }, 00:15:42.224 { 00:15:42.224 "name": "BaseBdev3", 00:15:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.224 "is_configured": false, 00:15:42.224 "data_offset": 0, 00:15:42.224 "data_size": 0 00:15:42.224 }, 00:15:42.224 { 00:15:42.224 "name": "BaseBdev4", 00:15:42.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.224 "is_configured": false, 00:15:42.224 "data_offset": 0, 00:15:42.224 "data_size": 0 00:15:42.224 } 00:15:42.224 ] 00:15:42.224 }' 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.224 17:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.792 [2024-10-25 17:57:01.035065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.792 [2024-10-25 17:57:01.035117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.792 [2024-10-25 17:57:01.047036] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.792 [2024-10-25 17:57:01.047090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.792 [2024-10-25 17:57:01.047099] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.792 [2024-10-25 17:57:01.047108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.792 [2024-10-25 17:57:01.047114] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.792 [2024-10-25 17:57:01.047123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.792 [2024-10-25 17:57:01.047129] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:42.792 [2024-10-25 17:57:01.047137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.792 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 [2024-10-25 17:57:01.091506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.793 BaseBdev1 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.793 
17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 [ 00:15:42.793 { 00:15:42.793 "name": "BaseBdev1", 00:15:42.793 "aliases": [ 00:15:42.793 "0d8c087a-ada7-43dd-bd55-7f00429e47b8" 00:15:42.793 ], 00:15:42.793 "product_name": "Malloc disk", 00:15:42.793 "block_size": 512, 00:15:42.793 "num_blocks": 65536, 00:15:42.793 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:42.793 "assigned_rate_limits": { 00:15:42.793 "rw_ios_per_sec": 0, 00:15:42.793 "rw_mbytes_per_sec": 0, 00:15:42.793 "r_mbytes_per_sec": 0, 00:15:42.793 "w_mbytes_per_sec": 0 00:15:42.793 }, 00:15:42.793 "claimed": true, 00:15:42.793 "claim_type": "exclusive_write", 00:15:42.793 "zoned": false, 00:15:42.793 "supported_io_types": { 00:15:42.793 "read": true, 00:15:42.793 "write": true, 00:15:42.793 "unmap": true, 00:15:42.793 "flush": true, 00:15:42.793 "reset": true, 00:15:42.793 "nvme_admin": false, 00:15:42.793 "nvme_io": false, 00:15:42.793 "nvme_io_md": false, 00:15:42.793 "write_zeroes": true, 00:15:42.793 "zcopy": true, 00:15:42.793 "get_zone_info": false, 00:15:42.793 "zone_management": false, 00:15:42.793 "zone_append": false, 00:15:42.793 "compare": false, 00:15:42.793 "compare_and_write": false, 00:15:42.793 "abort": true, 00:15:42.793 "seek_hole": false, 00:15:42.793 "seek_data": false, 00:15:42.793 "copy": true, 00:15:42.793 "nvme_iov_md": false 00:15:42.793 }, 00:15:42.793 "memory_domains": [ 00:15:42.793 { 00:15:42.793 "dma_device_id": "system", 00:15:42.793 "dma_device_type": 1 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.793 "dma_device_type": 2 00:15:42.793 } 00:15:42.793 ], 00:15:42.793 "driver_specific": {} 00:15:42.793 } 
00:15:42.793 ] 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.793 "name": "Existed_Raid", 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.793 "strip_size_kb": 64, 00:15:42.793 "state": "configuring", 00:15:42.793 "raid_level": "raid5f", 00:15:42.793 "superblock": false, 00:15:42.793 "num_base_bdevs": 4, 00:15:42.793 "num_base_bdevs_discovered": 1, 00:15:42.793 "num_base_bdevs_operational": 4, 00:15:42.793 "base_bdevs_list": [ 00:15:42.793 { 00:15:42.793 "name": "BaseBdev1", 00:15:42.793 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:42.793 "is_configured": true, 00:15:42.793 "data_offset": 0, 00:15:42.793 "data_size": 65536 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "name": "BaseBdev2", 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.793 "is_configured": false, 00:15:42.793 "data_offset": 0, 00:15:42.793 "data_size": 0 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "name": "BaseBdev3", 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.793 "is_configured": false, 00:15:42.793 "data_offset": 0, 00:15:42.793 "data_size": 0 00:15:42.793 }, 00:15:42.793 { 00:15:42.793 "name": "BaseBdev4", 00:15:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.793 "is_configured": false, 00:15:42.793 "data_offset": 0, 00:15:42.793 "data_size": 0 00:15:42.793 } 00:15:42.793 ] 00:15:42.793 }' 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.793 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.359 
[2024-10-25 17:57:01.602718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.359 [2024-10-25 17:57:01.602779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.359 [2024-10-25 17:57:01.610756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.359 [2024-10-25 17:57:01.612642] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.359 [2024-10-25 17:57:01.612690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.359 [2024-10-25 17:57:01.612701] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.359 [2024-10-25 17:57:01.612728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.359 [2024-10-25 17:57:01.612736] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.359 [2024-10-25 17:57:01.612745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.359 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.360 "name": "Existed_Raid", 00:15:43.360 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:43.360 "strip_size_kb": 64, 00:15:43.360 "state": "configuring", 00:15:43.360 "raid_level": "raid5f", 00:15:43.360 "superblock": false, 00:15:43.360 "num_base_bdevs": 4, 00:15:43.360 "num_base_bdevs_discovered": 1, 00:15:43.360 "num_base_bdevs_operational": 4, 00:15:43.360 "base_bdevs_list": [ 00:15:43.360 { 00:15:43.360 "name": "BaseBdev1", 00:15:43.360 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:43.360 "is_configured": true, 00:15:43.360 "data_offset": 0, 00:15:43.360 "data_size": 65536 00:15:43.360 }, 00:15:43.360 { 00:15:43.360 "name": "BaseBdev2", 00:15:43.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.360 "is_configured": false, 00:15:43.360 "data_offset": 0, 00:15:43.360 "data_size": 0 00:15:43.360 }, 00:15:43.360 { 00:15:43.360 "name": "BaseBdev3", 00:15:43.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.360 "is_configured": false, 00:15:43.360 "data_offset": 0, 00:15:43.360 "data_size": 0 00:15:43.360 }, 00:15:43.360 { 00:15:43.360 "name": "BaseBdev4", 00:15:43.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.360 "is_configured": false, 00:15:43.360 "data_offset": 0, 00:15:43.360 "data_size": 0 00:15:43.360 } 00:15:43.360 ] 00:15:43.360 }' 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.360 17:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.627 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.627 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.627 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 [2024-10-25 17:57:02.086180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.896 BaseBdev2 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 [ 00:15:43.896 { 00:15:43.896 "name": "BaseBdev2", 00:15:43.896 "aliases": [ 00:15:43.896 "fe29c7eb-7d68-4cda-853b-0d6239eeb40f" 00:15:43.896 ], 00:15:43.896 "product_name": "Malloc disk", 00:15:43.896 "block_size": 512, 00:15:43.896 "num_blocks": 65536, 00:15:43.896 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:43.896 "assigned_rate_limits": { 00:15:43.896 "rw_ios_per_sec": 0, 00:15:43.896 "rw_mbytes_per_sec": 0, 00:15:43.896 
"r_mbytes_per_sec": 0, 00:15:43.896 "w_mbytes_per_sec": 0 00:15:43.896 }, 00:15:43.896 "claimed": true, 00:15:43.896 "claim_type": "exclusive_write", 00:15:43.896 "zoned": false, 00:15:43.896 "supported_io_types": { 00:15:43.896 "read": true, 00:15:43.896 "write": true, 00:15:43.896 "unmap": true, 00:15:43.896 "flush": true, 00:15:43.896 "reset": true, 00:15:43.896 "nvme_admin": false, 00:15:43.896 "nvme_io": false, 00:15:43.896 "nvme_io_md": false, 00:15:43.896 "write_zeroes": true, 00:15:43.896 "zcopy": true, 00:15:43.896 "get_zone_info": false, 00:15:43.896 "zone_management": false, 00:15:43.896 "zone_append": false, 00:15:43.896 "compare": false, 00:15:43.896 "compare_and_write": false, 00:15:43.896 "abort": true, 00:15:43.896 "seek_hole": false, 00:15:43.896 "seek_data": false, 00:15:43.896 "copy": true, 00:15:43.896 "nvme_iov_md": false 00:15:43.896 }, 00:15:43.896 "memory_domains": [ 00:15:43.896 { 00:15:43.896 "dma_device_id": "system", 00:15:43.896 "dma_device_type": 1 00:15:43.896 }, 00:15:43.896 { 00:15:43.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.896 "dma_device_type": 2 00:15:43.896 } 00:15:43.896 ], 00:15:43.896 "driver_specific": {} 00:15:43.896 } 00:15:43.896 ] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.896 "name": "Existed_Raid", 00:15:43.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.896 "strip_size_kb": 64, 00:15:43.896 "state": "configuring", 00:15:43.896 "raid_level": "raid5f", 00:15:43.896 "superblock": false, 00:15:43.896 "num_base_bdevs": 4, 00:15:43.896 "num_base_bdevs_discovered": 2, 00:15:43.896 "num_base_bdevs_operational": 4, 00:15:43.896 "base_bdevs_list": [ 00:15:43.896 { 00:15:43.896 "name": "BaseBdev1", 00:15:43.896 "uuid": 
"0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:43.896 "is_configured": true, 00:15:43.896 "data_offset": 0, 00:15:43.896 "data_size": 65536 00:15:43.896 }, 00:15:43.896 { 00:15:43.896 "name": "BaseBdev2", 00:15:43.896 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:43.896 "is_configured": true, 00:15:43.896 "data_offset": 0, 00:15:43.896 "data_size": 65536 00:15:43.896 }, 00:15:43.896 { 00:15:43.896 "name": "BaseBdev3", 00:15:43.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.896 "is_configured": false, 00:15:43.896 "data_offset": 0, 00:15:43.896 "data_size": 0 00:15:43.896 }, 00:15:43.896 { 00:15:43.896 "name": "BaseBdev4", 00:15:43.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.896 "is_configured": false, 00:15:43.896 "data_offset": 0, 00:15:43.896 "data_size": 0 00:15:43.896 } 00:15:43.896 ] 00:15:43.896 }' 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.896 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.155 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.155 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.155 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 [2024-10-25 17:57:02.633994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.415 BaseBdev3 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 [ 00:15:44.415 { 00:15:44.415 "name": "BaseBdev3", 00:15:44.415 "aliases": [ 00:15:44.415 "2a2ffe1c-9430-4ea7-b6da-db567ed38c62" 00:15:44.415 ], 00:15:44.415 "product_name": "Malloc disk", 00:15:44.415 "block_size": 512, 00:15:44.415 "num_blocks": 65536, 00:15:44.415 "uuid": "2a2ffe1c-9430-4ea7-b6da-db567ed38c62", 00:15:44.415 "assigned_rate_limits": { 00:15:44.415 "rw_ios_per_sec": 0, 00:15:44.415 "rw_mbytes_per_sec": 0, 00:15:44.415 "r_mbytes_per_sec": 0, 00:15:44.415 "w_mbytes_per_sec": 0 00:15:44.415 }, 00:15:44.415 "claimed": true, 00:15:44.415 "claim_type": "exclusive_write", 00:15:44.415 "zoned": false, 00:15:44.415 "supported_io_types": { 00:15:44.415 "read": true, 00:15:44.415 "write": true, 00:15:44.415 "unmap": true, 00:15:44.415 "flush": true, 00:15:44.415 "reset": true, 00:15:44.415 "nvme_admin": false, 
00:15:44.415 "nvme_io": false, 00:15:44.415 "nvme_io_md": false, 00:15:44.415 "write_zeroes": true, 00:15:44.415 "zcopy": true, 00:15:44.415 "get_zone_info": false, 00:15:44.415 "zone_management": false, 00:15:44.415 "zone_append": false, 00:15:44.415 "compare": false, 00:15:44.415 "compare_and_write": false, 00:15:44.415 "abort": true, 00:15:44.415 "seek_hole": false, 00:15:44.415 "seek_data": false, 00:15:44.415 "copy": true, 00:15:44.415 "nvme_iov_md": false 00:15:44.415 }, 00:15:44.415 "memory_domains": [ 00:15:44.415 { 00:15:44.415 "dma_device_id": "system", 00:15:44.415 "dma_device_type": 1 00:15:44.415 }, 00:15:44.415 { 00:15:44.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.415 "dma_device_type": 2 00:15:44.415 } 00:15:44.415 ], 00:15:44.415 "driver_specific": {} 00:15:44.415 } 00:15:44.415 ] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.415 "name": "Existed_Raid", 00:15:44.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.415 "strip_size_kb": 64, 00:15:44.415 "state": "configuring", 00:15:44.415 "raid_level": "raid5f", 00:15:44.415 "superblock": false, 00:15:44.415 "num_base_bdevs": 4, 00:15:44.415 "num_base_bdevs_discovered": 3, 00:15:44.415 "num_base_bdevs_operational": 4, 00:15:44.415 "base_bdevs_list": [ 00:15:44.415 { 00:15:44.415 "name": "BaseBdev1", 00:15:44.415 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:44.415 "is_configured": true, 00:15:44.415 "data_offset": 0, 00:15:44.415 "data_size": 65536 00:15:44.415 }, 00:15:44.415 { 00:15:44.415 "name": "BaseBdev2", 00:15:44.415 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:44.415 "is_configured": true, 00:15:44.415 "data_offset": 0, 00:15:44.415 "data_size": 65536 00:15:44.415 }, 00:15:44.415 { 
00:15:44.415 "name": "BaseBdev3", 00:15:44.415 "uuid": "2a2ffe1c-9430-4ea7-b6da-db567ed38c62", 00:15:44.415 "is_configured": true, 00:15:44.415 "data_offset": 0, 00:15:44.415 "data_size": 65536 00:15:44.415 }, 00:15:44.415 { 00:15:44.415 "name": "BaseBdev4", 00:15:44.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.415 "is_configured": false, 00:15:44.415 "data_offset": 0, 00:15:44.415 "data_size": 0 00:15:44.415 } 00:15:44.415 ] 00:15:44.415 }' 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.415 17:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.986 [2024-10-25 17:57:03.169206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.986 [2024-10-25 17:57:03.169274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:44.986 [2024-10-25 17:57:03.169284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:44.986 [2024-10-25 17:57:03.169540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:44.986 [2024-10-25 17:57:03.176775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:44.986 [2024-10-25 17:57:03.176799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:44.986 [2024-10-25 17:57:03.177058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.986 BaseBdev4 00:15:44.986 17:57:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.986 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.986 [ 00:15:44.986 { 00:15:44.986 "name": "BaseBdev4", 00:15:44.986 "aliases": [ 00:15:44.986 "b38fb202-ef37-4df2-afd3-b5b964a23022" 00:15:44.986 ], 00:15:44.986 "product_name": "Malloc disk", 00:15:44.986 "block_size": 512, 00:15:44.986 "num_blocks": 65536, 00:15:44.986 "uuid": "b38fb202-ef37-4df2-afd3-b5b964a23022", 00:15:44.986 "assigned_rate_limits": { 00:15:44.986 "rw_ios_per_sec": 0, 00:15:44.986 
"rw_mbytes_per_sec": 0, 00:15:44.986 "r_mbytes_per_sec": 0, 00:15:44.986 "w_mbytes_per_sec": 0 00:15:44.986 }, 00:15:44.986 "claimed": true, 00:15:44.986 "claim_type": "exclusive_write", 00:15:44.986 "zoned": false, 00:15:44.986 "supported_io_types": { 00:15:44.986 "read": true, 00:15:44.986 "write": true, 00:15:44.986 "unmap": true, 00:15:44.986 "flush": true, 00:15:44.986 "reset": true, 00:15:44.986 "nvme_admin": false, 00:15:44.986 "nvme_io": false, 00:15:44.986 "nvme_io_md": false, 00:15:44.986 "write_zeroes": true, 00:15:44.986 "zcopy": true, 00:15:44.986 "get_zone_info": false, 00:15:44.986 "zone_management": false, 00:15:44.986 "zone_append": false, 00:15:44.986 "compare": false, 00:15:44.986 "compare_and_write": false, 00:15:44.986 "abort": true, 00:15:44.986 "seek_hole": false, 00:15:44.986 "seek_data": false, 00:15:44.986 "copy": true, 00:15:44.986 "nvme_iov_md": false 00:15:44.986 }, 00:15:44.986 "memory_domains": [ 00:15:44.986 { 00:15:44.986 "dma_device_id": "system", 00:15:44.987 "dma_device_type": 1 00:15:44.987 }, 00:15:44.987 { 00:15:44.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.987 "dma_device_type": 2 00:15:44.987 } 00:15:44.987 ], 00:15:44.987 "driver_specific": {} 00:15:44.987 } 00:15:44.987 ] 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.987 17:57:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.987 "name": "Existed_Raid", 00:15:44.987 "uuid": "c9978198-e9b3-4ad3-933c-08b48e679fa6", 00:15:44.987 "strip_size_kb": 64, 00:15:44.987 "state": "online", 00:15:44.987 "raid_level": "raid5f", 00:15:44.987 "superblock": false, 00:15:44.987 "num_base_bdevs": 4, 00:15:44.987 "num_base_bdevs_discovered": 4, 00:15:44.987 "num_base_bdevs_operational": 4, 00:15:44.987 "base_bdevs_list": [ 00:15:44.987 { 00:15:44.987 "name": 
"BaseBdev1", 00:15:44.987 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:44.987 "is_configured": true, 00:15:44.987 "data_offset": 0, 00:15:44.987 "data_size": 65536 00:15:44.987 }, 00:15:44.987 { 00:15:44.987 "name": "BaseBdev2", 00:15:44.987 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:44.987 "is_configured": true, 00:15:44.987 "data_offset": 0, 00:15:44.987 "data_size": 65536 00:15:44.987 }, 00:15:44.987 { 00:15:44.987 "name": "BaseBdev3", 00:15:44.987 "uuid": "2a2ffe1c-9430-4ea7-b6da-db567ed38c62", 00:15:44.987 "is_configured": true, 00:15:44.987 "data_offset": 0, 00:15:44.987 "data_size": 65536 00:15:44.987 }, 00:15:44.987 { 00:15:44.987 "name": "BaseBdev4", 00:15:44.987 "uuid": "b38fb202-ef37-4df2-afd3-b5b964a23022", 00:15:44.987 "is_configured": true, 00:15:44.987 "data_offset": 0, 00:15:44.987 "data_size": 65536 00:15:44.987 } 00:15:44.987 ] 00:15:44.987 }' 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.987 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.557 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.558 [2024-10-25 17:57:03.700588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.558 "name": "Existed_Raid", 00:15:45.558 "aliases": [ 00:15:45.558 "c9978198-e9b3-4ad3-933c-08b48e679fa6" 00:15:45.558 ], 00:15:45.558 "product_name": "Raid Volume", 00:15:45.558 "block_size": 512, 00:15:45.558 "num_blocks": 196608, 00:15:45.558 "uuid": "c9978198-e9b3-4ad3-933c-08b48e679fa6", 00:15:45.558 "assigned_rate_limits": { 00:15:45.558 "rw_ios_per_sec": 0, 00:15:45.558 "rw_mbytes_per_sec": 0, 00:15:45.558 "r_mbytes_per_sec": 0, 00:15:45.558 "w_mbytes_per_sec": 0 00:15:45.558 }, 00:15:45.558 "claimed": false, 00:15:45.558 "zoned": false, 00:15:45.558 "supported_io_types": { 00:15:45.558 "read": true, 00:15:45.558 "write": true, 00:15:45.558 "unmap": false, 00:15:45.558 "flush": false, 00:15:45.558 "reset": true, 00:15:45.558 "nvme_admin": false, 00:15:45.558 "nvme_io": false, 00:15:45.558 "nvme_io_md": false, 00:15:45.558 "write_zeroes": true, 00:15:45.558 "zcopy": false, 00:15:45.558 "get_zone_info": false, 00:15:45.558 "zone_management": false, 00:15:45.558 "zone_append": false, 00:15:45.558 "compare": false, 00:15:45.558 "compare_and_write": false, 00:15:45.558 "abort": false, 00:15:45.558 "seek_hole": false, 00:15:45.558 "seek_data": false, 00:15:45.558 "copy": false, 00:15:45.558 "nvme_iov_md": false 00:15:45.558 }, 00:15:45.558 "driver_specific": { 00:15:45.558 "raid": { 00:15:45.558 "uuid": "c9978198-e9b3-4ad3-933c-08b48e679fa6", 00:15:45.558 "strip_size_kb": 64, 
00:15:45.558 "state": "online", 00:15:45.558 "raid_level": "raid5f", 00:15:45.558 "superblock": false, 00:15:45.558 "num_base_bdevs": 4, 00:15:45.558 "num_base_bdevs_discovered": 4, 00:15:45.558 "num_base_bdevs_operational": 4, 00:15:45.558 "base_bdevs_list": [ 00:15:45.558 { 00:15:45.558 "name": "BaseBdev1", 00:15:45.558 "uuid": "0d8c087a-ada7-43dd-bd55-7f00429e47b8", 00:15:45.558 "is_configured": true, 00:15:45.558 "data_offset": 0, 00:15:45.558 "data_size": 65536 00:15:45.558 }, 00:15:45.558 { 00:15:45.558 "name": "BaseBdev2", 00:15:45.558 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:45.558 "is_configured": true, 00:15:45.558 "data_offset": 0, 00:15:45.558 "data_size": 65536 00:15:45.558 }, 00:15:45.558 { 00:15:45.558 "name": "BaseBdev3", 00:15:45.558 "uuid": "2a2ffe1c-9430-4ea7-b6da-db567ed38c62", 00:15:45.558 "is_configured": true, 00:15:45.558 "data_offset": 0, 00:15:45.558 "data_size": 65536 00:15:45.558 }, 00:15:45.558 { 00:15:45.558 "name": "BaseBdev4", 00:15:45.558 "uuid": "b38fb202-ef37-4df2-afd3-b5b964a23022", 00:15:45.558 "is_configured": true, 00:15:45.558 "data_offset": 0, 00:15:45.558 "data_size": 65536 00:15:45.558 } 00:15:45.558 ] 00:15:45.558 } 00:15:45.558 } 00:15:45.558 }' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.558 BaseBdev2 00:15:45.558 BaseBdev3 00:15:45.558 BaseBdev4' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.558 17:57:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.558 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.819 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.819 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.819 17:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.819 17:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.819 17:57:03 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:45.819 [2024-10-25 17:57:03.999895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.819 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.820 17:57:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.820 "name": "Existed_Raid", 00:15:45.820 "uuid": "c9978198-e9b3-4ad3-933c-08b48e679fa6", 00:15:45.820 "strip_size_kb": 64, 00:15:45.820 "state": "online", 00:15:45.820 "raid_level": "raid5f", 00:15:45.820 "superblock": false, 00:15:45.820 "num_base_bdevs": 4, 00:15:45.820 "num_base_bdevs_discovered": 3, 00:15:45.820 "num_base_bdevs_operational": 3, 00:15:45.820 "base_bdevs_list": [ 00:15:45.820 { 00:15:45.820 "name": null, 00:15:45.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.820 "is_configured": false, 00:15:45.820 "data_offset": 0, 00:15:45.820 "data_size": 65536 00:15:45.820 }, 00:15:45.820 { 00:15:45.820 "name": "BaseBdev2", 00:15:45.820 "uuid": "fe29c7eb-7d68-4cda-853b-0d6239eeb40f", 00:15:45.820 "is_configured": true, 00:15:45.820 "data_offset": 0, 00:15:45.820 "data_size": 65536 00:15:45.820 }, 00:15:45.820 { 00:15:45.820 "name": "BaseBdev3", 00:15:45.820 "uuid": "2a2ffe1c-9430-4ea7-b6da-db567ed38c62", 00:15:45.820 "is_configured": true, 00:15:45.820 "data_offset": 0, 00:15:45.820 "data_size": 65536 00:15:45.820 }, 00:15:45.820 { 00:15:45.820 "name": "BaseBdev4", 00:15:45.820 "uuid": "b38fb202-ef37-4df2-afd3-b5b964a23022", 00:15:45.820 "is_configured": true, 00:15:45.820 "data_offset": 0, 00:15:45.820 "data_size": 65536 00:15:45.820 } 00:15:45.820 ] 00:15:45.820 }' 00:15:45.820 
17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.820 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.390 [2024-10-25 17:57:04.638481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.390 [2024-10-25 17:57:04.638646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.390 [2024-10-25 17:57:04.733340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.390 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.390 [2024-10-25 17:57:04.785280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.649 17:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.649 [2024-10-25 17:57:04.948130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:46.649 [2024-10-25 17:57:04.948185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.649 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.649 17:57:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 BaseBdev2 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 [ 00:15:46.909 { 00:15:46.909 "name": "BaseBdev2", 00:15:46.909 "aliases": [ 00:15:46.909 "49b903f6-cdea-4b68-a1d8-3078097007de" 00:15:46.909 ], 00:15:46.909 "product_name": "Malloc disk", 00:15:46.909 "block_size": 512, 00:15:46.909 "num_blocks": 65536, 00:15:46.909 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:46.909 "assigned_rate_limits": { 00:15:46.909 "rw_ios_per_sec": 0, 00:15:46.909 "rw_mbytes_per_sec": 0, 00:15:46.909 "r_mbytes_per_sec": 0, 00:15:46.909 "w_mbytes_per_sec": 0 00:15:46.909 }, 00:15:46.909 "claimed": false, 00:15:46.909 "zoned": false, 00:15:46.909 "supported_io_types": { 00:15:46.909 "read": true, 00:15:46.909 "write": true, 00:15:46.909 "unmap": true, 00:15:46.909 "flush": true, 00:15:46.909 "reset": true, 00:15:46.909 "nvme_admin": false, 00:15:46.909 "nvme_io": false, 00:15:46.909 "nvme_io_md": false, 00:15:46.909 "write_zeroes": true, 00:15:46.909 "zcopy": true, 00:15:46.909 "get_zone_info": false, 00:15:46.909 "zone_management": false, 00:15:46.909 "zone_append": false, 00:15:46.909 "compare": false, 00:15:46.909 "compare_and_write": false, 00:15:46.909 "abort": true, 00:15:46.909 "seek_hole": false, 00:15:46.909 "seek_data": false, 00:15:46.909 "copy": true, 00:15:46.909 "nvme_iov_md": false 00:15:46.909 }, 00:15:46.909 "memory_domains": [ 00:15:46.909 { 00:15:46.909 "dma_device_id": "system", 00:15:46.909 "dma_device_type": 1 00:15:46.909 }, 
00:15:46.909 { 00:15:46.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.909 "dma_device_type": 2 00:15:46.909 } 00:15:46.909 ], 00:15:46.909 "driver_specific": {} 00:15:46.909 } 00:15:46.909 ] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 BaseBdev3 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.909 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 [ 00:15:46.910 { 00:15:46.910 "name": "BaseBdev3", 00:15:46.910 "aliases": [ 00:15:46.910 "1b0c5c8f-02bc-4d16-8592-c523f118f240" 00:15:46.910 ], 00:15:46.910 "product_name": "Malloc disk", 00:15:46.910 "block_size": 512, 00:15:46.910 "num_blocks": 65536, 00:15:46.910 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:46.910 "assigned_rate_limits": { 00:15:46.910 "rw_ios_per_sec": 0, 00:15:46.910 "rw_mbytes_per_sec": 0, 00:15:46.910 "r_mbytes_per_sec": 0, 00:15:46.910 "w_mbytes_per_sec": 0 00:15:46.910 }, 00:15:46.910 "claimed": false, 00:15:46.910 "zoned": false, 00:15:46.910 "supported_io_types": { 00:15:46.910 "read": true, 00:15:46.910 "write": true, 00:15:46.910 "unmap": true, 00:15:46.910 "flush": true, 00:15:46.910 "reset": true, 00:15:46.910 "nvme_admin": false, 00:15:46.910 "nvme_io": false, 00:15:46.910 "nvme_io_md": false, 00:15:46.910 "write_zeroes": true, 00:15:46.910 "zcopy": true, 00:15:46.910 "get_zone_info": false, 00:15:46.910 "zone_management": false, 00:15:46.910 "zone_append": false, 00:15:46.910 "compare": false, 00:15:46.910 "compare_and_write": false, 00:15:46.910 "abort": true, 00:15:46.910 "seek_hole": false, 00:15:46.910 "seek_data": false, 00:15:46.910 "copy": true, 00:15:46.910 "nvme_iov_md": false 00:15:46.910 }, 00:15:46.910 "memory_domains": [ 00:15:46.910 { 00:15:46.910 "dma_device_id": "system", 00:15:46.910 
"dma_device_type": 1 00:15:46.910 }, 00:15:46.910 { 00:15:46.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.910 "dma_device_type": 2 00:15:46.910 } 00:15:46.910 ], 00:15:46.910 "driver_specific": {} 00:15:46.910 } 00:15:46.910 ] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 BaseBdev4 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.910 17:57:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 [ 00:15:46.910 { 00:15:46.910 "name": "BaseBdev4", 00:15:46.910 "aliases": [ 00:15:46.910 "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b" 00:15:46.910 ], 00:15:46.910 "product_name": "Malloc disk", 00:15:46.910 "block_size": 512, 00:15:46.910 "num_blocks": 65536, 00:15:46.910 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:46.910 "assigned_rate_limits": { 00:15:46.910 "rw_ios_per_sec": 0, 00:15:46.910 "rw_mbytes_per_sec": 0, 00:15:46.910 "r_mbytes_per_sec": 0, 00:15:46.910 "w_mbytes_per_sec": 0 00:15:46.910 }, 00:15:46.910 "claimed": false, 00:15:46.910 "zoned": false, 00:15:46.910 "supported_io_types": { 00:15:46.910 "read": true, 00:15:46.910 "write": true, 00:15:46.910 "unmap": true, 00:15:46.910 "flush": true, 00:15:46.910 "reset": true, 00:15:46.910 "nvme_admin": false, 00:15:46.910 "nvme_io": false, 00:15:46.910 "nvme_io_md": false, 00:15:46.910 "write_zeroes": true, 00:15:46.910 "zcopy": true, 00:15:46.910 "get_zone_info": false, 00:15:46.910 "zone_management": false, 00:15:46.910 "zone_append": false, 00:15:46.910 "compare": false, 00:15:46.910 "compare_and_write": false, 00:15:46.910 "abort": true, 00:15:46.910 "seek_hole": false, 00:15:46.910 "seek_data": false, 00:15:46.910 "copy": true, 00:15:46.910 "nvme_iov_md": false 00:15:46.910 }, 00:15:46.910 "memory_domains": [ 00:15:46.910 { 00:15:46.910 
"dma_device_id": "system", 00:15:46.910 "dma_device_type": 1 00:15:46.910 }, 00:15:46.910 { 00:15:46.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.910 "dma_device_type": 2 00:15:46.910 } 00:15:46.910 ], 00:15:46.910 "driver_specific": {} 00:15:46.910 } 00:15:46.910 ] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.910 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 [2024-10-25 17:57:05.347570] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.169 [2024-10-25 17:57:05.347666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.169 [2024-10-25 17:57:05.347715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.169 [2024-10-25 17:57:05.349718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.169 [2024-10-25 17:57:05.349834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.169 "name": "Existed_Raid", 00:15:47.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.169 "strip_size_kb": 64, 00:15:47.169 "state": "configuring", 00:15:47.169 "raid_level": "raid5f", 00:15:47.169 "superblock": false, 00:15:47.169 
"num_base_bdevs": 4, 00:15:47.169 "num_base_bdevs_discovered": 3, 00:15:47.169 "num_base_bdevs_operational": 4, 00:15:47.169 "base_bdevs_list": [ 00:15:47.169 { 00:15:47.169 "name": "BaseBdev1", 00:15:47.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.169 "is_configured": false, 00:15:47.169 "data_offset": 0, 00:15:47.169 "data_size": 0 00:15:47.169 }, 00:15:47.169 { 00:15:47.169 "name": "BaseBdev2", 00:15:47.169 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:47.169 "is_configured": true, 00:15:47.169 "data_offset": 0, 00:15:47.169 "data_size": 65536 00:15:47.169 }, 00:15:47.169 { 00:15:47.169 "name": "BaseBdev3", 00:15:47.169 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:47.169 "is_configured": true, 00:15:47.169 "data_offset": 0, 00:15:47.169 "data_size": 65536 00:15:47.169 }, 00:15:47.169 { 00:15:47.169 "name": "BaseBdev4", 00:15:47.169 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:47.169 "is_configured": true, 00:15:47.169 "data_offset": 0, 00:15:47.169 "data_size": 65536 00:15:47.169 } 00:15:47.169 ] 00:15:47.169 }' 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.169 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 [2024-10-25 17:57:05.814770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
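
The log above walks through the core raid5f state-function scenario: malloc base bdevs are created one at a time, a raid5f array is declared over four names before all of them exist, and base bdevs are then removed so the array stays in the "configuring" state. The following is a minimal illustrative sketch of that RPC sequence, assuming a running SPDK application with `rpc.py` on PATH; the bdev names mirror the test (`BaseBdev1`..`BaseBdev4`, `Existed_Raid`), but this is a hand-written summary of what the transcript exercises, not the actual test script (`test/bdev/bdev_raid.sh`).

```shell
# Sketch only: requires a running SPDK target; names follow the log above.

# Create three of the four 32 MiB / 512 B-block malloc base bdevs.
# BaseBdev1 is deliberately left missing, so the raid cannot come online.
for i in 2 3 4; do
    rpc.py bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Declare a raid5f bdev (64 KiB strip) over all four names. With one base
# bdev absent, the array reports state "configuring" rather than "online".
rpc.py bdev_raid_create -z 64 -r raid5f \
    -b "BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4" -n Existed_Raid

# Removing a present base bdev drops num_base_bdevs_discovered by one
# while num_base_bdevs_operational stays at 4.
rpc.py bdev_raid_remove_base_bdev BaseBdev2

# Inspect the array the same way the test's verify helper does.
rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
```

The test's `verify_raid_bdev_state` helper is essentially the last step above plus `jq` field checks against the expected state, level, strip size, and base-bdev counts visible in the JSON dumps throughout this log.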
00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.428 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.429 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.688 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.688 "name": "Existed_Raid", 00:15:47.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.688 "strip_size_kb": 64, 00:15:47.688 "state": "configuring", 00:15:47.688 "raid_level": "raid5f", 00:15:47.688 "superblock": false, 00:15:47.688 "num_base_bdevs": 4, 
00:15:47.688 "num_base_bdevs_discovered": 2, 00:15:47.688 "num_base_bdevs_operational": 4, 00:15:47.688 "base_bdevs_list": [ 00:15:47.688 { 00:15:47.688 "name": "BaseBdev1", 00:15:47.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.688 "is_configured": false, 00:15:47.688 "data_offset": 0, 00:15:47.688 "data_size": 0 00:15:47.688 }, 00:15:47.688 { 00:15:47.688 "name": null, 00:15:47.688 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:47.688 "is_configured": false, 00:15:47.688 "data_offset": 0, 00:15:47.688 "data_size": 65536 00:15:47.688 }, 00:15:47.688 { 00:15:47.688 "name": "BaseBdev3", 00:15:47.688 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:47.688 "is_configured": true, 00:15:47.688 "data_offset": 0, 00:15:47.688 "data_size": 65536 00:15:47.688 }, 00:15:47.688 { 00:15:47.688 "name": "BaseBdev4", 00:15:47.688 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:47.688 "is_configured": true, 00:15:47.688 "data_offset": 0, 00:15:47.688 "data_size": 65536 00:15:47.688 } 00:15:47.688 ] 00:15:47.688 }' 00:15:47.688 17:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.688 17:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:47.948 17:57:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.948 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.209 [2024-10-25 17:57:06.386143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.209 BaseBdev1 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.209 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.209 17:57:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.209 [ 00:15:48.209 { 00:15:48.209 "name": "BaseBdev1", 00:15:48.209 "aliases": [ 00:15:48.209 "fc014e54-c2f0-4783-ad47-88afac09518a" 00:15:48.209 ], 00:15:48.209 "product_name": "Malloc disk", 00:15:48.209 "block_size": 512, 00:15:48.209 "num_blocks": 65536, 00:15:48.209 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:48.209 "assigned_rate_limits": { 00:15:48.209 "rw_ios_per_sec": 0, 00:15:48.209 "rw_mbytes_per_sec": 0, 00:15:48.209 "r_mbytes_per_sec": 0, 00:15:48.209 "w_mbytes_per_sec": 0 00:15:48.209 }, 00:15:48.209 "claimed": true, 00:15:48.209 "claim_type": "exclusive_write", 00:15:48.209 "zoned": false, 00:15:48.210 "supported_io_types": { 00:15:48.210 "read": true, 00:15:48.210 "write": true, 00:15:48.210 "unmap": true, 00:15:48.210 "flush": true, 00:15:48.210 "reset": true, 00:15:48.210 "nvme_admin": false, 00:15:48.210 "nvme_io": false, 00:15:48.210 "nvme_io_md": false, 00:15:48.210 "write_zeroes": true, 00:15:48.210 "zcopy": true, 00:15:48.210 "get_zone_info": false, 00:15:48.210 "zone_management": false, 00:15:48.210 "zone_append": false, 00:15:48.210 "compare": false, 00:15:48.210 "compare_and_write": false, 00:15:48.210 "abort": true, 00:15:48.210 "seek_hole": false, 00:15:48.210 "seek_data": false, 00:15:48.210 "copy": true, 00:15:48.210 "nvme_iov_md": false 00:15:48.210 }, 00:15:48.210 "memory_domains": [ 00:15:48.210 { 00:15:48.210 "dma_device_id": "system", 00:15:48.210 "dma_device_type": 1 00:15:48.210 }, 00:15:48.210 { 00:15:48.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.210 "dma_device_type": 2 00:15:48.210 } 00:15:48.210 ], 00:15:48.210 "driver_specific": {} 00:15:48.210 } 00:15:48.210 ] 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:48.210 17:57:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.210 "name": "Existed_Raid", 00:15:48.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.210 "strip_size_kb": 64, 00:15:48.210 "state": 
"configuring", 00:15:48.210 "raid_level": "raid5f", 00:15:48.210 "superblock": false, 00:15:48.210 "num_base_bdevs": 4, 00:15:48.210 "num_base_bdevs_discovered": 3, 00:15:48.210 "num_base_bdevs_operational": 4, 00:15:48.210 "base_bdevs_list": [ 00:15:48.210 { 00:15:48.210 "name": "BaseBdev1", 00:15:48.210 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:48.210 "is_configured": true, 00:15:48.210 "data_offset": 0, 00:15:48.210 "data_size": 65536 00:15:48.210 }, 00:15:48.210 { 00:15:48.210 "name": null, 00:15:48.210 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:48.210 "is_configured": false, 00:15:48.210 "data_offset": 0, 00:15:48.210 "data_size": 65536 00:15:48.210 }, 00:15:48.210 { 00:15:48.210 "name": "BaseBdev3", 00:15:48.210 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:48.210 "is_configured": true, 00:15:48.210 "data_offset": 0, 00:15:48.210 "data_size": 65536 00:15:48.210 }, 00:15:48.210 { 00:15:48.210 "name": "BaseBdev4", 00:15:48.210 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:48.210 "is_configured": true, 00:15:48.210 "data_offset": 0, 00:15:48.210 "data_size": 65536 00:15:48.210 } 00:15:48.210 ] 00:15:48.210 }' 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.210 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.469 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:48.469 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.469 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.469 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.469 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.727 17:57:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.727 [2024-10-25 17:57:06.933297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.727 17:57:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.727 "name": "Existed_Raid", 00:15:48.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.727 "strip_size_kb": 64, 00:15:48.727 "state": "configuring", 00:15:48.727 "raid_level": "raid5f", 00:15:48.727 "superblock": false, 00:15:48.727 "num_base_bdevs": 4, 00:15:48.727 "num_base_bdevs_discovered": 2, 00:15:48.727 "num_base_bdevs_operational": 4, 00:15:48.727 "base_bdevs_list": [ 00:15:48.727 { 00:15:48.727 "name": "BaseBdev1", 00:15:48.727 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:48.727 "is_configured": true, 00:15:48.727 "data_offset": 0, 00:15:48.727 "data_size": 65536 00:15:48.727 }, 00:15:48.727 { 00:15:48.727 "name": null, 00:15:48.727 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:48.727 "is_configured": false, 00:15:48.727 "data_offset": 0, 00:15:48.727 "data_size": 65536 00:15:48.727 }, 00:15:48.727 { 00:15:48.727 "name": null, 00:15:48.727 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:48.727 "is_configured": false, 00:15:48.727 "data_offset": 0, 00:15:48.727 "data_size": 65536 00:15:48.727 }, 00:15:48.727 { 00:15:48.727 "name": "BaseBdev4", 00:15:48.727 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:48.727 "is_configured": true, 00:15:48.727 "data_offset": 0, 00:15:48.727 "data_size": 65536 00:15:48.727 } 00:15:48.727 ] 00:15:48.727 }' 00:15:48.727 17:57:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.727 17:57:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.985 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.985 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.985 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.985 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 [2024-10-25 17:57:07.468377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.243 
17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.243 "name": "Existed_Raid", 00:15:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.243 "strip_size_kb": 64, 00:15:49.243 "state": "configuring", 00:15:49.243 "raid_level": "raid5f", 00:15:49.243 "superblock": false, 00:15:49.243 "num_base_bdevs": 4, 00:15:49.243 "num_base_bdevs_discovered": 3, 00:15:49.243 "num_base_bdevs_operational": 4, 00:15:49.243 "base_bdevs_list": [ 00:15:49.243 { 00:15:49.243 "name": "BaseBdev1", 00:15:49.243 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:49.243 "is_configured": true, 00:15:49.243 "data_offset": 0, 00:15:49.243 "data_size": 65536 00:15:49.243 }, 00:15:49.243 { 00:15:49.243 "name": null, 00:15:49.243 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:49.243 "is_configured": 
false, 00:15:49.243 "data_offset": 0, 00:15:49.243 "data_size": 65536 00:15:49.243 }, 00:15:49.243 { 00:15:49.243 "name": "BaseBdev3", 00:15:49.243 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:49.243 "is_configured": true, 00:15:49.243 "data_offset": 0, 00:15:49.243 "data_size": 65536 00:15:49.243 }, 00:15:49.243 { 00:15:49.243 "name": "BaseBdev4", 00:15:49.243 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:49.243 "is_configured": true, 00:15:49.243 "data_offset": 0, 00:15:49.243 "data_size": 65536 00:15:49.243 } 00:15:49.243 ] 00:15:49.243 }' 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.243 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.503 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.503 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.503 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.763 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:49.763 17:57:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:49.763 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.763 17:57:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 [2024-10-25 17:57:07.955559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.763 "name": "Existed_Raid", 00:15:49.763 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:49.763 "strip_size_kb": 64, 00:15:49.763 "state": "configuring", 00:15:49.763 "raid_level": "raid5f", 00:15:49.763 "superblock": false, 00:15:49.763 "num_base_bdevs": 4, 00:15:49.763 "num_base_bdevs_discovered": 2, 00:15:49.763 "num_base_bdevs_operational": 4, 00:15:49.763 "base_bdevs_list": [ 00:15:49.763 { 00:15:49.763 "name": null, 00:15:49.763 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:49.763 "is_configured": false, 00:15:49.763 "data_offset": 0, 00:15:49.763 "data_size": 65536 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": null, 00:15:49.763 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:49.763 "is_configured": false, 00:15:49.763 "data_offset": 0, 00:15:49.763 "data_size": 65536 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": "BaseBdev3", 00:15:49.763 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:49.763 "is_configured": true, 00:15:49.763 "data_offset": 0, 00:15:49.763 "data_size": 65536 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "name": "BaseBdev4", 00:15:49.763 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:49.763 "is_configured": true, 00:15:49.763 "data_offset": 0, 00:15:49.763 "data_size": 65536 00:15:49.763 } 00:15:49.763 ] 00:15:49.763 }' 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.763 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.331 [2024-10-25 17:57:08.560689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.331 17:57:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.331 "name": "Existed_Raid", 00:15:50.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.331 "strip_size_kb": 64, 00:15:50.331 "state": "configuring", 00:15:50.331 "raid_level": "raid5f", 00:15:50.331 "superblock": false, 00:15:50.332 "num_base_bdevs": 4, 00:15:50.332 "num_base_bdevs_discovered": 3, 00:15:50.332 "num_base_bdevs_operational": 4, 00:15:50.332 "base_bdevs_list": [ 00:15:50.332 { 00:15:50.332 "name": null, 00:15:50.332 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:50.332 "is_configured": false, 00:15:50.332 "data_offset": 0, 00:15:50.332 "data_size": 65536 00:15:50.332 }, 00:15:50.332 { 00:15:50.332 "name": "BaseBdev2", 00:15:50.332 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:50.332 "is_configured": true, 00:15:50.332 "data_offset": 0, 00:15:50.332 "data_size": 65536 00:15:50.332 }, 00:15:50.332 { 00:15:50.332 "name": "BaseBdev3", 00:15:50.332 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:50.332 "is_configured": true, 00:15:50.332 "data_offset": 0, 00:15:50.332 "data_size": 65536 00:15:50.332 }, 00:15:50.332 { 00:15:50.332 "name": "BaseBdev4", 00:15:50.332 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:50.332 "is_configured": true, 00:15:50.332 "data_offset": 0, 00:15:50.332 "data_size": 65536 00:15:50.332 } 00:15:50.332 ] 00:15:50.332 }' 00:15:50.332 17:57:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.332 17:57:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.590 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.590 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.590 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.590 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc014e54-c2f0-4783-ad47-88afac09518a 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 [2024-10-25 17:57:09.152450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:50.848 [2024-10-25 
17:57:09.152508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.848 [2024-10-25 17:57:09.152516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:50.848 [2024-10-25 17:57:09.152759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:50.848 [2024-10-25 17:57:09.159742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.848 [2024-10-25 17:57:09.159766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:50.848 [2024-10-25 17:57:09.160064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.848 NewBaseBdev 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 [ 00:15:50.848 { 00:15:50.848 "name": "NewBaseBdev", 00:15:50.848 "aliases": [ 00:15:50.849 "fc014e54-c2f0-4783-ad47-88afac09518a" 00:15:50.849 ], 00:15:50.849 "product_name": "Malloc disk", 00:15:50.849 "block_size": 512, 00:15:50.849 "num_blocks": 65536, 00:15:50.849 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:50.849 "assigned_rate_limits": { 00:15:50.849 "rw_ios_per_sec": 0, 00:15:50.849 "rw_mbytes_per_sec": 0, 00:15:50.849 "r_mbytes_per_sec": 0, 00:15:50.849 "w_mbytes_per_sec": 0 00:15:50.849 }, 00:15:50.849 "claimed": true, 00:15:50.849 "claim_type": "exclusive_write", 00:15:50.849 "zoned": false, 00:15:50.849 "supported_io_types": { 00:15:50.849 "read": true, 00:15:50.849 "write": true, 00:15:50.849 "unmap": true, 00:15:50.849 "flush": true, 00:15:50.849 "reset": true, 00:15:50.849 "nvme_admin": false, 00:15:50.849 "nvme_io": false, 00:15:50.849 "nvme_io_md": false, 00:15:50.849 "write_zeroes": true, 00:15:50.849 "zcopy": true, 00:15:50.849 "get_zone_info": false, 00:15:50.849 "zone_management": false, 00:15:50.849 "zone_append": false, 00:15:50.849 "compare": false, 00:15:50.849 "compare_and_write": false, 00:15:50.849 "abort": true, 00:15:50.849 "seek_hole": false, 00:15:50.849 "seek_data": false, 00:15:50.849 "copy": true, 00:15:50.849 "nvme_iov_md": false 00:15:50.849 }, 00:15:50.849 "memory_domains": [ 00:15:50.849 { 00:15:50.849 "dma_device_id": "system", 00:15:50.849 "dma_device_type": 1 00:15:50.849 }, 00:15:50.849 { 00:15:50.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.849 "dma_device_type": 2 00:15:50.849 } 
00:15:50.849 ], 00:15:50.849 "driver_specific": {} 00:15:50.849 } 00:15:50.849 ] 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.849 "name": "Existed_Raid", 00:15:50.849 "uuid": "de0f9e73-b65c-4144-a554-486fcfcc1d6d", 00:15:50.849 "strip_size_kb": 64, 00:15:50.849 "state": "online", 00:15:50.849 "raid_level": "raid5f", 00:15:50.849 "superblock": false, 00:15:50.849 "num_base_bdevs": 4, 00:15:50.849 "num_base_bdevs_discovered": 4, 00:15:50.849 "num_base_bdevs_operational": 4, 00:15:50.849 "base_bdevs_list": [ 00:15:50.849 { 00:15:50.849 "name": "NewBaseBdev", 00:15:50.849 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:50.849 "is_configured": true, 00:15:50.849 "data_offset": 0, 00:15:50.849 "data_size": 65536 00:15:50.849 }, 00:15:50.849 { 00:15:50.849 "name": "BaseBdev2", 00:15:50.849 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:50.849 "is_configured": true, 00:15:50.849 "data_offset": 0, 00:15:50.849 "data_size": 65536 00:15:50.849 }, 00:15:50.849 { 00:15:50.849 "name": "BaseBdev3", 00:15:50.849 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:50.849 "is_configured": true, 00:15:50.849 "data_offset": 0, 00:15:50.849 "data_size": 65536 00:15:50.849 }, 00:15:50.849 { 00:15:50.849 "name": "BaseBdev4", 00:15:50.849 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:50.849 "is_configured": true, 00:15:50.849 "data_offset": 0, 00:15:50.849 "data_size": 65536 00:15:50.849 } 00:15:50.849 ] 00:15:50.849 }' 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.849 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.415 [2024-10-25 17:57:09.683906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.415 "name": "Existed_Raid", 00:15:51.415 "aliases": [ 00:15:51.415 "de0f9e73-b65c-4144-a554-486fcfcc1d6d" 00:15:51.415 ], 00:15:51.415 "product_name": "Raid Volume", 00:15:51.415 "block_size": 512, 00:15:51.415 "num_blocks": 196608, 00:15:51.415 "uuid": "de0f9e73-b65c-4144-a554-486fcfcc1d6d", 00:15:51.415 "assigned_rate_limits": { 00:15:51.415 "rw_ios_per_sec": 0, 00:15:51.415 "rw_mbytes_per_sec": 0, 00:15:51.415 "r_mbytes_per_sec": 0, 00:15:51.415 "w_mbytes_per_sec": 0 00:15:51.415 }, 00:15:51.415 "claimed": false, 00:15:51.415 "zoned": false, 00:15:51.415 "supported_io_types": { 00:15:51.415 "read": true, 00:15:51.415 "write": true, 00:15:51.415 "unmap": false, 00:15:51.415 "flush": false, 00:15:51.415 "reset": true, 00:15:51.415 "nvme_admin": false, 00:15:51.415 "nvme_io": false, 00:15:51.415 "nvme_io_md": 
false, 00:15:51.415 "write_zeroes": true, 00:15:51.415 "zcopy": false, 00:15:51.415 "get_zone_info": false, 00:15:51.415 "zone_management": false, 00:15:51.415 "zone_append": false, 00:15:51.415 "compare": false, 00:15:51.415 "compare_and_write": false, 00:15:51.415 "abort": false, 00:15:51.415 "seek_hole": false, 00:15:51.415 "seek_data": false, 00:15:51.415 "copy": false, 00:15:51.415 "nvme_iov_md": false 00:15:51.415 }, 00:15:51.415 "driver_specific": { 00:15:51.415 "raid": { 00:15:51.415 "uuid": "de0f9e73-b65c-4144-a554-486fcfcc1d6d", 00:15:51.415 "strip_size_kb": 64, 00:15:51.415 "state": "online", 00:15:51.415 "raid_level": "raid5f", 00:15:51.415 "superblock": false, 00:15:51.415 "num_base_bdevs": 4, 00:15:51.415 "num_base_bdevs_discovered": 4, 00:15:51.415 "num_base_bdevs_operational": 4, 00:15:51.415 "base_bdevs_list": [ 00:15:51.415 { 00:15:51.415 "name": "NewBaseBdev", 00:15:51.415 "uuid": "fc014e54-c2f0-4783-ad47-88afac09518a", 00:15:51.415 "is_configured": true, 00:15:51.415 "data_offset": 0, 00:15:51.415 "data_size": 65536 00:15:51.415 }, 00:15:51.415 { 00:15:51.415 "name": "BaseBdev2", 00:15:51.415 "uuid": "49b903f6-cdea-4b68-a1d8-3078097007de", 00:15:51.415 "is_configured": true, 00:15:51.415 "data_offset": 0, 00:15:51.415 "data_size": 65536 00:15:51.415 }, 00:15:51.415 { 00:15:51.415 "name": "BaseBdev3", 00:15:51.415 "uuid": "1b0c5c8f-02bc-4d16-8592-c523f118f240", 00:15:51.415 "is_configured": true, 00:15:51.415 "data_offset": 0, 00:15:51.415 "data_size": 65536 00:15:51.415 }, 00:15:51.415 { 00:15:51.415 "name": "BaseBdev4", 00:15:51.415 "uuid": "8caf4bab-4a90-4c0f-9f67-30c8f8028e7b", 00:15:51.415 "is_configured": true, 00:15:51.415 "data_offset": 0, 00:15:51.415 "data_size": 65536 00:15:51.415 } 00:15:51.415 ] 00:15:51.415 } 00:15:51.415 } 00:15:51.415 }' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.415 17:57:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:51.415 BaseBdev2 00:15:51.415 BaseBdev3 00:15:51.415 BaseBdev4' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.415 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.674 17:57:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.674 [2024-10-25 17:57:10.031005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.674 [2024-10-25 17:57:10.031034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.674 [2024-10-25 17:57:10.031106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.674 [2024-10-25 17:57:10.031406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.674 [2024-10-25 17:57:10.031417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82669 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82669 ']' 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82669 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.674 17:57:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82669 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.674 killing process with pid 82669 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82669' 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82669 00:15:51.674 [2024-10-25 17:57:10.077558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.674 17:57:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82669 00:15:52.241 [2024-10-25 17:57:10.475254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.175 17:57:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:53.175 00:15:53.175 real 0m11.936s 00:15:53.175 user 0m19.018s 00:15:53.175 sys 0m2.195s 00:15:53.175 17:57:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.175 17:57:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.175 ************************************ 00:15:53.175 END TEST raid5f_state_function_test 00:15:53.175 ************************************ 00:15:53.433 17:57:11 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:53.433 17:57:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:53.433 17:57:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.433 17:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.433 ************************************ 00:15:53.433 START TEST 
raid5f_state_function_test_sb 00:15:53.433 ************************************ 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:53.433 
17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:53.433 Process raid pid: 83346 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83346 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83346' 00:15:53.433 17:57:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83346 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83346 ']' 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.433 17:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.433 [2024-10-25 17:57:11.762131] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:15:53.433 [2024-10-25 17:57:11.762333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.690 [2024-10-25 17:57:11.932138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.690 [2024-10-25 17:57:12.052430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.949 [2024-10-25 17:57:12.269389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.949 [2024-10-25 17:57:12.269528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.209 [2024-10-25 17:57:12.605250] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.209 [2024-10-25 17:57:12.605307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.209 [2024-10-25 17:57:12.605323] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.209 [2024-10-25 17:57:12.605333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.209 [2024-10-25 17:57:12.605339] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:54.209 [2024-10-25 17:57:12.605348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.209 [2024-10-25 17:57:12.605355] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.209 [2024-10-25 17:57:12.605364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.209 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.468 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.469 "name": "Existed_Raid", 00:15:54.469 "uuid": "9e4fcdcd-562c-465d-8da6-73ebcf93978c", 00:15:54.469 "strip_size_kb": 64, 00:15:54.469 "state": "configuring", 00:15:54.469 "raid_level": "raid5f", 00:15:54.469 "superblock": true, 00:15:54.469 "num_base_bdevs": 4, 00:15:54.469 "num_base_bdevs_discovered": 0, 00:15:54.469 "num_base_bdevs_operational": 4, 00:15:54.469 "base_bdevs_list": [ 00:15:54.469 { 00:15:54.469 "name": "BaseBdev1", 00:15:54.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.469 "is_configured": false, 00:15:54.469 "data_offset": 0, 00:15:54.469 "data_size": 0 00:15:54.469 }, 00:15:54.469 { 00:15:54.469 "name": "BaseBdev2", 00:15:54.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.469 "is_configured": false, 00:15:54.469 "data_offset": 0, 00:15:54.469 "data_size": 0 00:15:54.469 }, 00:15:54.469 { 00:15:54.469 "name": "BaseBdev3", 00:15:54.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.469 "is_configured": false, 00:15:54.469 "data_offset": 0, 00:15:54.469 "data_size": 0 00:15:54.469 }, 00:15:54.469 { 00:15:54.469 "name": "BaseBdev4", 00:15:54.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.469 "is_configured": false, 00:15:54.469 "data_offset": 0, 00:15:54.469 "data_size": 0 00:15:54.469 } 00:15:54.469 ] 00:15:54.469 }' 00:15:54.469 17:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.469 17:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.728 [2024-10-25 17:57:13.032477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.728 [2024-10-25 17:57:13.032601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.728 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.728 [2024-10-25 17:57:13.044473] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.728 [2024-10-25 17:57:13.044570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.728 [2024-10-25 17:57:13.044600] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.728 [2024-10-25 17:57:13.044638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.729 [2024-10-25 17:57:13.044670] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.729 [2024-10-25 17:57:13.044693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.729 [2024-10-25 17:57:13.044742] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.729 [2024-10-25 17:57:13.044765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.729 [2024-10-25 17:57:13.090934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.729 BaseBdev1 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.729 [ 00:15:54.729 { 00:15:54.729 "name": "BaseBdev1", 00:15:54.729 "aliases": [ 00:15:54.729 "2576c489-783c-4ace-a47b-fc4fb338f21d" 00:15:54.729 ], 00:15:54.729 "product_name": "Malloc disk", 00:15:54.729 "block_size": 512, 00:15:54.729 "num_blocks": 65536, 00:15:54.729 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:54.729 "assigned_rate_limits": { 00:15:54.729 "rw_ios_per_sec": 0, 00:15:54.729 "rw_mbytes_per_sec": 0, 00:15:54.729 "r_mbytes_per_sec": 0, 00:15:54.729 "w_mbytes_per_sec": 0 00:15:54.729 }, 00:15:54.729 "claimed": true, 00:15:54.729 "claim_type": "exclusive_write", 00:15:54.729 "zoned": false, 00:15:54.729 "supported_io_types": { 00:15:54.729 "read": true, 00:15:54.729 "write": true, 00:15:54.729 "unmap": true, 00:15:54.729 "flush": true, 00:15:54.729 "reset": true, 00:15:54.729 "nvme_admin": false, 00:15:54.729 "nvme_io": false, 00:15:54.729 "nvme_io_md": false, 00:15:54.729 "write_zeroes": true, 00:15:54.729 "zcopy": true, 00:15:54.729 "get_zone_info": false, 00:15:54.729 "zone_management": false, 00:15:54.729 "zone_append": false, 00:15:54.729 "compare": false, 00:15:54.729 "compare_and_write": false, 00:15:54.729 "abort": true, 00:15:54.729 "seek_hole": false, 00:15:54.729 "seek_data": false, 00:15:54.729 "copy": true, 00:15:54.729 "nvme_iov_md": false 00:15:54.729 }, 00:15:54.729 "memory_domains": [ 00:15:54.729 { 00:15:54.729 "dma_device_id": "system", 00:15:54.729 "dma_device_type": 1 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:54.729 "dma_device_type": 2 00:15:54.729 } 00:15:54.729 ], 00:15:54.729 "driver_specific": {} 00:15:54.729 } 00:15:54.729 ] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.729 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.729 17:57:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.994 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.994 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.994 "name": "Existed_Raid", 00:15:54.994 "uuid": "21ebee1a-8c5f-410a-b402-e28cd7bde787", 00:15:54.994 "strip_size_kb": 64, 00:15:54.994 "state": "configuring", 00:15:54.994 "raid_level": "raid5f", 00:15:54.994 "superblock": true, 00:15:54.994 "num_base_bdevs": 4, 00:15:54.994 "num_base_bdevs_discovered": 1, 00:15:54.994 "num_base_bdevs_operational": 4, 00:15:54.994 "base_bdevs_list": [ 00:15:54.994 { 00:15:54.994 "name": "BaseBdev1", 00:15:54.994 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:54.994 "is_configured": true, 00:15:54.994 "data_offset": 2048, 00:15:54.994 "data_size": 63488 00:15:54.994 }, 00:15:54.994 { 00:15:54.994 "name": "BaseBdev2", 00:15:54.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.994 "is_configured": false, 00:15:54.994 "data_offset": 0, 00:15:54.994 "data_size": 0 00:15:54.994 }, 00:15:54.994 { 00:15:54.994 "name": "BaseBdev3", 00:15:54.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.994 "is_configured": false, 00:15:54.994 "data_offset": 0, 00:15:54.994 "data_size": 0 00:15:54.994 }, 00:15:54.994 { 00:15:54.994 "name": "BaseBdev4", 00:15:54.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.994 "is_configured": false, 00:15:54.994 "data_offset": 0, 00:15:54.994 "data_size": 0 00:15:54.994 } 00:15:54.994 ] 00:15:54.994 }' 00:15:54.994 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.994 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.260 17:57:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 [2024-10-25 17:57:13.606129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.260 [2024-10-25 17:57:13.606196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 [2024-10-25 17:57:13.618195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.260 [2024-10-25 17:57:13.620098] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.260 [2024-10-25 17:57:13.620168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.260 [2024-10-25 17:57:13.620200] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.260 [2024-10-25 17:57:13.620225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.260 [2024-10-25 17:57:13.620279] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.260 [2024-10-25 17:57:13.620308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 17:57:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.260 "name": "Existed_Raid", 00:15:55.260 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:55.260 "strip_size_kb": 64, 00:15:55.260 "state": "configuring", 00:15:55.260 "raid_level": "raid5f", 00:15:55.260 "superblock": true, 00:15:55.260 "num_base_bdevs": 4, 00:15:55.260 "num_base_bdevs_discovered": 1, 00:15:55.260 "num_base_bdevs_operational": 4, 00:15:55.260 "base_bdevs_list": [ 00:15:55.260 { 00:15:55.260 "name": "BaseBdev1", 00:15:55.260 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:55.260 "is_configured": true, 00:15:55.260 "data_offset": 2048, 00:15:55.260 "data_size": 63488 00:15:55.260 }, 00:15:55.260 { 00:15:55.260 "name": "BaseBdev2", 00:15:55.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.260 "is_configured": false, 00:15:55.260 "data_offset": 0, 00:15:55.260 "data_size": 0 00:15:55.260 }, 00:15:55.260 { 00:15:55.260 "name": "BaseBdev3", 00:15:55.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.260 "is_configured": false, 00:15:55.260 "data_offset": 0, 00:15:55.260 "data_size": 0 00:15:55.260 }, 00:15:55.260 { 00:15:55.260 "name": "BaseBdev4", 00:15:55.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.260 "is_configured": false, 00:15:55.260 "data_offset": 0, 00:15:55.260 "data_size": 0 00:15:55.260 } 00:15:55.260 ] 00:15:55.260 }' 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.260 17:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 [2024-10-25 17:57:14.135461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.827 BaseBdev2 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 [ 00:15:55.827 { 00:15:55.827 "name": "BaseBdev2", 00:15:55.827 "aliases": [ 00:15:55.827 
"853db74c-98dc-4260-8d3f-1a7e8329365c" 00:15:55.827 ], 00:15:55.827 "product_name": "Malloc disk", 00:15:55.827 "block_size": 512, 00:15:55.827 "num_blocks": 65536, 00:15:55.827 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:55.827 "assigned_rate_limits": { 00:15:55.827 "rw_ios_per_sec": 0, 00:15:55.827 "rw_mbytes_per_sec": 0, 00:15:55.827 "r_mbytes_per_sec": 0, 00:15:55.827 "w_mbytes_per_sec": 0 00:15:55.827 }, 00:15:55.827 "claimed": true, 00:15:55.827 "claim_type": "exclusive_write", 00:15:55.827 "zoned": false, 00:15:55.827 "supported_io_types": { 00:15:55.827 "read": true, 00:15:55.827 "write": true, 00:15:55.827 "unmap": true, 00:15:55.827 "flush": true, 00:15:55.827 "reset": true, 00:15:55.827 "nvme_admin": false, 00:15:55.827 "nvme_io": false, 00:15:55.827 "nvme_io_md": false, 00:15:55.827 "write_zeroes": true, 00:15:55.827 "zcopy": true, 00:15:55.827 "get_zone_info": false, 00:15:55.827 "zone_management": false, 00:15:55.827 "zone_append": false, 00:15:55.827 "compare": false, 00:15:55.827 "compare_and_write": false, 00:15:55.827 "abort": true, 00:15:55.827 "seek_hole": false, 00:15:55.827 "seek_data": false, 00:15:55.827 "copy": true, 00:15:55.827 "nvme_iov_md": false 00:15:55.827 }, 00:15:55.827 "memory_domains": [ 00:15:55.827 { 00:15:55.827 "dma_device_id": "system", 00:15:55.827 "dma_device_type": 1 00:15:55.827 }, 00:15:55.827 { 00:15:55.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.827 "dma_device_type": 2 00:15:55.827 } 00:15:55.827 ], 00:15:55.827 "driver_specific": {} 00:15:55.827 } 00:15:55.827 ] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.827 "name": "Existed_Raid", 00:15:55.827 "uuid": 
"c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:55.827 "strip_size_kb": 64, 00:15:55.827 "state": "configuring", 00:15:55.827 "raid_level": "raid5f", 00:15:55.827 "superblock": true, 00:15:55.827 "num_base_bdevs": 4, 00:15:55.827 "num_base_bdevs_discovered": 2, 00:15:55.827 "num_base_bdevs_operational": 4, 00:15:55.827 "base_bdevs_list": [ 00:15:55.827 { 00:15:55.827 "name": "BaseBdev1", 00:15:55.827 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:55.827 "is_configured": true, 00:15:55.827 "data_offset": 2048, 00:15:55.827 "data_size": 63488 00:15:55.827 }, 00:15:55.827 { 00:15:55.827 "name": "BaseBdev2", 00:15:55.827 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:55.827 "is_configured": true, 00:15:55.827 "data_offset": 2048, 00:15:55.827 "data_size": 63488 00:15:55.827 }, 00:15:55.827 { 00:15:55.827 "name": "BaseBdev3", 00:15:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.827 "is_configured": false, 00:15:55.827 "data_offset": 0, 00:15:55.827 "data_size": 0 00:15:55.827 }, 00:15:55.827 { 00:15:55.827 "name": "BaseBdev4", 00:15:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.827 "is_configured": false, 00:15:55.827 "data_offset": 0, 00:15:55.827 "data_size": 0 00:15:55.827 } 00:15:55.827 ] 00:15:55.827 }' 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.827 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.396 [2024-10-25 17:57:14.695653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.396 BaseBdev3 
00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.396 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 [ 00:15:56.397 { 00:15:56.397 "name": "BaseBdev3", 00:15:56.397 "aliases": [ 00:15:56.397 "28126fa5-a45d-4244-870f-fb5dd3b17fc4" 00:15:56.397 ], 00:15:56.397 "product_name": "Malloc disk", 00:15:56.397 "block_size": 512, 00:15:56.397 "num_blocks": 65536, 00:15:56.397 "uuid": "28126fa5-a45d-4244-870f-fb5dd3b17fc4", 00:15:56.397 
"assigned_rate_limits": { 00:15:56.397 "rw_ios_per_sec": 0, 00:15:56.397 "rw_mbytes_per_sec": 0, 00:15:56.397 "r_mbytes_per_sec": 0, 00:15:56.397 "w_mbytes_per_sec": 0 00:15:56.397 }, 00:15:56.397 "claimed": true, 00:15:56.397 "claim_type": "exclusive_write", 00:15:56.397 "zoned": false, 00:15:56.397 "supported_io_types": { 00:15:56.397 "read": true, 00:15:56.397 "write": true, 00:15:56.397 "unmap": true, 00:15:56.397 "flush": true, 00:15:56.397 "reset": true, 00:15:56.397 "nvme_admin": false, 00:15:56.397 "nvme_io": false, 00:15:56.397 "nvme_io_md": false, 00:15:56.397 "write_zeroes": true, 00:15:56.397 "zcopy": true, 00:15:56.397 "get_zone_info": false, 00:15:56.397 "zone_management": false, 00:15:56.397 "zone_append": false, 00:15:56.397 "compare": false, 00:15:56.397 "compare_and_write": false, 00:15:56.397 "abort": true, 00:15:56.397 "seek_hole": false, 00:15:56.397 "seek_data": false, 00:15:56.397 "copy": true, 00:15:56.397 "nvme_iov_md": false 00:15:56.397 }, 00:15:56.397 "memory_domains": [ 00:15:56.397 { 00:15:56.397 "dma_device_id": "system", 00:15:56.397 "dma_device_type": 1 00:15:56.397 }, 00:15:56.397 { 00:15:56.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.397 "dma_device_type": 2 00:15:56.397 } 00:15:56.397 ], 00:15:56.397 "driver_specific": {} 00:15:56.397 } 00:15:56.397 ] 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.397 "name": "Existed_Raid", 00:15:56.397 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:56.397 "strip_size_kb": 64, 00:15:56.397 "state": "configuring", 00:15:56.397 "raid_level": "raid5f", 00:15:56.397 "superblock": true, 00:15:56.397 "num_base_bdevs": 4, 00:15:56.397 "num_base_bdevs_discovered": 3, 
00:15:56.397 "num_base_bdevs_operational": 4, 00:15:56.397 "base_bdevs_list": [ 00:15:56.397 { 00:15:56.397 "name": "BaseBdev1", 00:15:56.397 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:56.397 "is_configured": true, 00:15:56.397 "data_offset": 2048, 00:15:56.397 "data_size": 63488 00:15:56.397 }, 00:15:56.397 { 00:15:56.397 "name": "BaseBdev2", 00:15:56.397 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:56.397 "is_configured": true, 00:15:56.397 "data_offset": 2048, 00:15:56.397 "data_size": 63488 00:15:56.397 }, 00:15:56.397 { 00:15:56.397 "name": "BaseBdev3", 00:15:56.397 "uuid": "28126fa5-a45d-4244-870f-fb5dd3b17fc4", 00:15:56.397 "is_configured": true, 00:15:56.397 "data_offset": 2048, 00:15:56.397 "data_size": 63488 00:15:56.397 }, 00:15:56.397 { 00:15:56.397 "name": "BaseBdev4", 00:15:56.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.397 "is_configured": false, 00:15:56.397 "data_offset": 0, 00:15:56.397 "data_size": 0 00:15:56.397 } 00:15:56.397 ] 00:15:56.397 }' 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.397 17:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.966 [2024-10-25 17:57:15.254466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.966 [2024-10-25 17:57:15.254755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:56.966 [2024-10-25 17:57:15.254770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:56.966 [2024-10-25 
17:57:15.255091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:56.966 BaseBdev4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.966 [2024-10-25 17:57:15.262925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:56.966 [2024-10-25 17:57:15.262997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:56.966 [2024-10-25 17:57:15.263312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.966 17:57:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.966 [ 00:15:56.966 { 00:15:56.966 "name": "BaseBdev4", 00:15:56.966 "aliases": [ 00:15:56.966 "525f3f69-86b3-44c1-b245-28f189f78101" 00:15:56.966 ], 00:15:56.966 "product_name": "Malloc disk", 00:15:56.966 "block_size": 512, 00:15:56.966 "num_blocks": 65536, 00:15:56.966 "uuid": "525f3f69-86b3-44c1-b245-28f189f78101", 00:15:56.966 "assigned_rate_limits": { 00:15:56.966 "rw_ios_per_sec": 0, 00:15:56.966 "rw_mbytes_per_sec": 0, 00:15:56.966 "r_mbytes_per_sec": 0, 00:15:56.966 "w_mbytes_per_sec": 0 00:15:56.966 }, 00:15:56.966 "claimed": true, 00:15:56.966 "claim_type": "exclusive_write", 00:15:56.966 "zoned": false, 00:15:56.966 "supported_io_types": { 00:15:56.966 "read": true, 00:15:56.966 "write": true, 00:15:56.966 "unmap": true, 00:15:56.966 "flush": true, 00:15:56.966 "reset": true, 00:15:56.966 "nvme_admin": false, 00:15:56.966 "nvme_io": false, 00:15:56.966 "nvme_io_md": false, 00:15:56.966 "write_zeroes": true, 00:15:56.966 "zcopy": true, 00:15:56.966 "get_zone_info": false, 00:15:56.966 "zone_management": false, 00:15:56.966 "zone_append": false, 00:15:56.966 "compare": false, 00:15:56.966 "compare_and_write": false, 00:15:56.966 "abort": true, 00:15:56.966 "seek_hole": false, 00:15:56.966 "seek_data": false, 00:15:56.966 "copy": true, 00:15:56.966 "nvme_iov_md": false 00:15:56.966 }, 00:15:56.966 "memory_domains": [ 00:15:56.966 { 00:15:56.966 "dma_device_id": "system", 00:15:56.966 "dma_device_type": 1 00:15:56.966 }, 00:15:56.966 { 00:15:56.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.966 "dma_device_type": 2 00:15:56.966 } 00:15:56.966 ], 00:15:56.966 "driver_specific": {} 00:15:56.966 } 00:15:56.966 ] 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.966 17:57:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.966 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.967 "name": "Existed_Raid", 00:15:56.967 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:56.967 "strip_size_kb": 64, 00:15:56.967 "state": "online", 00:15:56.967 "raid_level": "raid5f", 00:15:56.967 "superblock": true, 00:15:56.967 "num_base_bdevs": 4, 00:15:56.967 "num_base_bdevs_discovered": 4, 00:15:56.967 "num_base_bdevs_operational": 4, 00:15:56.967 "base_bdevs_list": [ 00:15:56.967 { 00:15:56.967 "name": "BaseBdev1", 00:15:56.967 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev2", 00:15:56.967 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev3", 00:15:56.967 "uuid": "28126fa5-a45d-4244-870f-fb5dd3b17fc4", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev4", 00:15:56.967 "uuid": "525f3f69-86b3-44c1-b245-28f189f78101", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 } 00:15:56.967 ] 00:15:56.967 }' 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.967 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.535 [2024-10-25 17:57:15.767458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.535 "name": "Existed_Raid", 00:15:57.535 "aliases": [ 00:15:57.535 "c6b053d8-d528-49df-be9b-4c259834c9ac" 00:15:57.535 ], 00:15:57.535 "product_name": "Raid Volume", 00:15:57.535 "block_size": 512, 00:15:57.535 "num_blocks": 190464, 00:15:57.535 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:57.535 "assigned_rate_limits": { 00:15:57.535 "rw_ios_per_sec": 0, 00:15:57.535 "rw_mbytes_per_sec": 0, 00:15:57.535 "r_mbytes_per_sec": 0, 00:15:57.535 "w_mbytes_per_sec": 0 00:15:57.535 }, 00:15:57.535 "claimed": false, 00:15:57.535 "zoned": false, 00:15:57.535 "supported_io_types": { 00:15:57.535 "read": true, 00:15:57.535 "write": true, 00:15:57.535 "unmap": false, 00:15:57.535 "flush": false, 
00:15:57.535 "reset": true, 00:15:57.535 "nvme_admin": false, 00:15:57.535 "nvme_io": false, 00:15:57.535 "nvme_io_md": false, 00:15:57.535 "write_zeroes": true, 00:15:57.535 "zcopy": false, 00:15:57.535 "get_zone_info": false, 00:15:57.535 "zone_management": false, 00:15:57.535 "zone_append": false, 00:15:57.535 "compare": false, 00:15:57.535 "compare_and_write": false, 00:15:57.535 "abort": false, 00:15:57.535 "seek_hole": false, 00:15:57.535 "seek_data": false, 00:15:57.535 "copy": false, 00:15:57.535 "nvme_iov_md": false 00:15:57.535 }, 00:15:57.535 "driver_specific": { 00:15:57.535 "raid": { 00:15:57.535 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:57.535 "strip_size_kb": 64, 00:15:57.535 "state": "online", 00:15:57.535 "raid_level": "raid5f", 00:15:57.535 "superblock": true, 00:15:57.535 "num_base_bdevs": 4, 00:15:57.535 "num_base_bdevs_discovered": 4, 00:15:57.535 "num_base_bdevs_operational": 4, 00:15:57.535 "base_bdevs_list": [ 00:15:57.535 { 00:15:57.535 "name": "BaseBdev1", 00:15:57.535 "uuid": "2576c489-783c-4ace-a47b-fc4fb338f21d", 00:15:57.535 "is_configured": true, 00:15:57.535 "data_offset": 2048, 00:15:57.535 "data_size": 63488 00:15:57.535 }, 00:15:57.535 { 00:15:57.535 "name": "BaseBdev2", 00:15:57.535 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:57.535 "is_configured": true, 00:15:57.535 "data_offset": 2048, 00:15:57.535 "data_size": 63488 00:15:57.535 }, 00:15:57.535 { 00:15:57.535 "name": "BaseBdev3", 00:15:57.535 "uuid": "28126fa5-a45d-4244-870f-fb5dd3b17fc4", 00:15:57.535 "is_configured": true, 00:15:57.535 "data_offset": 2048, 00:15:57.535 "data_size": 63488 00:15:57.535 }, 00:15:57.535 { 00:15:57.535 "name": "BaseBdev4", 00:15:57.535 "uuid": "525f3f69-86b3-44c1-b245-28f189f78101", 00:15:57.535 "is_configured": true, 00:15:57.535 "data_offset": 2048, 00:15:57.535 "data_size": 63488 00:15:57.535 } 00:15:57.535 ] 00:15:57.535 } 00:15:57.535 } 00:15:57.535 }' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:57.535 BaseBdev2 00:15:57.535 BaseBdev3 00:15:57.535 BaseBdev4' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.535 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.536 17:57:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.794 17:57:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.794 [2024-10-25 17:57:16.098764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.794 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.053 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.053 "name": "Existed_Raid", 00:15:58.053 "uuid": "c6b053d8-d528-49df-be9b-4c259834c9ac", 00:15:58.053 "strip_size_kb": 64, 00:15:58.053 "state": "online", 00:15:58.053 "raid_level": "raid5f", 00:15:58.053 "superblock": true, 00:15:58.053 "num_base_bdevs": 4, 00:15:58.053 "num_base_bdevs_discovered": 3, 00:15:58.053 "num_base_bdevs_operational": 3, 00:15:58.053 "base_bdevs_list": [ 00:15:58.053 { 00:15:58.053 "name": null, 00:15:58.053 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:58.053 "is_configured": false, 00:15:58.053 "data_offset": 0, 00:15:58.053 "data_size": 63488 00:15:58.053 }, 00:15:58.053 { 00:15:58.053 "name": "BaseBdev2", 00:15:58.053 "uuid": "853db74c-98dc-4260-8d3f-1a7e8329365c", 00:15:58.053 "is_configured": true, 00:15:58.053 "data_offset": 2048, 00:15:58.053 "data_size": 63488 00:15:58.053 }, 00:15:58.053 { 00:15:58.053 "name": "BaseBdev3", 00:15:58.053 "uuid": "28126fa5-a45d-4244-870f-fb5dd3b17fc4", 00:15:58.053 "is_configured": true, 00:15:58.053 "data_offset": 2048, 00:15:58.053 "data_size": 63488 00:15:58.053 }, 00:15:58.053 { 00:15:58.053 "name": "BaseBdev4", 00:15:58.053 "uuid": "525f3f69-86b3-44c1-b245-28f189f78101", 00:15:58.053 "is_configured": true, 00:15:58.053 "data_offset": 2048, 00:15:58.053 "data_size": 63488 00:15:58.053 } 00:15:58.053 ] 00:15:58.053 }' 00:15:58.053 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.053 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.312 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:58.313 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.313 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:58.313 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.313 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.313 [2024-10-25 17:57:16.731773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.313 [2024-10-25 17:57:16.731971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.572 [2024-10-25 17:57:16.827894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.572 
17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.572 [2024-10-25 17:57:16.883800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.572 17:57:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.572 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.832 [2024-10-25 17:57:17.024745] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:58.832 [2024-10-25 17:57:17.024807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.832 BaseBdev2 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.832 [ 00:15:58.832 { 00:15:58.832 "name": "BaseBdev2", 00:15:58.832 "aliases": [ 00:15:58.832 "4e5778ee-3fac-48f1-8187-f43788673c81" 00:15:58.832 ], 00:15:58.832 "product_name": "Malloc disk", 00:15:58.832 "block_size": 512, 00:15:58.832 "num_blocks": 65536, 00:15:58.832 "uuid": 
"4e5778ee-3fac-48f1-8187-f43788673c81", 00:15:58.832 "assigned_rate_limits": { 00:15:58.832 "rw_ios_per_sec": 0, 00:15:58.832 "rw_mbytes_per_sec": 0, 00:15:58.832 "r_mbytes_per_sec": 0, 00:15:58.832 "w_mbytes_per_sec": 0 00:15:58.832 }, 00:15:58.832 "claimed": false, 00:15:58.832 "zoned": false, 00:15:58.832 "supported_io_types": { 00:15:58.832 "read": true, 00:15:58.832 "write": true, 00:15:58.832 "unmap": true, 00:15:58.832 "flush": true, 00:15:58.832 "reset": true, 00:15:58.832 "nvme_admin": false, 00:15:58.832 "nvme_io": false, 00:15:58.832 "nvme_io_md": false, 00:15:58.832 "write_zeroes": true, 00:15:58.832 "zcopy": true, 00:15:58.832 "get_zone_info": false, 00:15:58.832 "zone_management": false, 00:15:58.832 "zone_append": false, 00:15:58.832 "compare": false, 00:15:58.832 "compare_and_write": false, 00:15:58.832 "abort": true, 00:15:58.832 "seek_hole": false, 00:15:58.832 "seek_data": false, 00:15:58.832 "copy": true, 00:15:58.832 "nvme_iov_md": false 00:15:58.832 }, 00:15:58.832 "memory_domains": [ 00:15:58.832 { 00:15:58.832 "dma_device_id": "system", 00:15:58.832 "dma_device_type": 1 00:15:58.832 }, 00:15:58.832 { 00:15:58.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.832 "dma_device_type": 2 00:15:58.832 } 00:15:58.832 ], 00:15:58.832 "driver_specific": {} 00:15:58.832 } 00:15:58.832 ] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.832 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.092 BaseBdev3 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.092 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.092 [ 00:15:59.092 { 00:15:59.092 "name": "BaseBdev3", 00:15:59.092 "aliases": [ 00:15:59.092 "9361c21b-2eb7-4e63-a9f1-da595e6c98b9" 00:15:59.092 ], 00:15:59.092 
"product_name": "Malloc disk", 00:15:59.092 "block_size": 512, 00:15:59.092 "num_blocks": 65536, 00:15:59.092 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:15:59.092 "assigned_rate_limits": { 00:15:59.092 "rw_ios_per_sec": 0, 00:15:59.092 "rw_mbytes_per_sec": 0, 00:15:59.092 "r_mbytes_per_sec": 0, 00:15:59.092 "w_mbytes_per_sec": 0 00:15:59.092 }, 00:15:59.092 "claimed": false, 00:15:59.092 "zoned": false, 00:15:59.092 "supported_io_types": { 00:15:59.093 "read": true, 00:15:59.093 "write": true, 00:15:59.093 "unmap": true, 00:15:59.093 "flush": true, 00:15:59.093 "reset": true, 00:15:59.093 "nvme_admin": false, 00:15:59.093 "nvme_io": false, 00:15:59.093 "nvme_io_md": false, 00:15:59.093 "write_zeroes": true, 00:15:59.093 "zcopy": true, 00:15:59.093 "get_zone_info": false, 00:15:59.093 "zone_management": false, 00:15:59.093 "zone_append": false, 00:15:59.093 "compare": false, 00:15:59.093 "compare_and_write": false, 00:15:59.093 "abort": true, 00:15:59.093 "seek_hole": false, 00:15:59.093 "seek_data": false, 00:15:59.093 "copy": true, 00:15:59.093 "nvme_iov_md": false 00:15:59.093 }, 00:15:59.093 "memory_domains": [ 00:15:59.093 { 00:15:59.093 "dma_device_id": "system", 00:15:59.093 "dma_device_type": 1 00:15:59.093 }, 00:15:59.093 { 00:15:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.093 "dma_device_type": 2 00:15:59.093 } 00:15:59.093 ], 00:15:59.093 "driver_specific": {} 00:15:59.093 } 00:15:59.093 ] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.093 BaseBdev4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.093 [ 00:15:59.093 { 00:15:59.093 "name": "BaseBdev4", 00:15:59.093 
"aliases": [ 00:15:59.093 "d50c9070-e5f0-43ca-9a59-57b467fc9b0b" 00:15:59.093 ], 00:15:59.093 "product_name": "Malloc disk", 00:15:59.093 "block_size": 512, 00:15:59.093 "num_blocks": 65536, 00:15:59.093 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:15:59.093 "assigned_rate_limits": { 00:15:59.093 "rw_ios_per_sec": 0, 00:15:59.093 "rw_mbytes_per_sec": 0, 00:15:59.093 "r_mbytes_per_sec": 0, 00:15:59.093 "w_mbytes_per_sec": 0 00:15:59.093 }, 00:15:59.093 "claimed": false, 00:15:59.093 "zoned": false, 00:15:59.093 "supported_io_types": { 00:15:59.093 "read": true, 00:15:59.093 "write": true, 00:15:59.093 "unmap": true, 00:15:59.093 "flush": true, 00:15:59.093 "reset": true, 00:15:59.093 "nvme_admin": false, 00:15:59.093 "nvme_io": false, 00:15:59.093 "nvme_io_md": false, 00:15:59.093 "write_zeroes": true, 00:15:59.093 "zcopy": true, 00:15:59.093 "get_zone_info": false, 00:15:59.093 "zone_management": false, 00:15:59.093 "zone_append": false, 00:15:59.093 "compare": false, 00:15:59.093 "compare_and_write": false, 00:15:59.093 "abort": true, 00:15:59.093 "seek_hole": false, 00:15:59.093 "seek_data": false, 00:15:59.093 "copy": true, 00:15:59.093 "nvme_iov_md": false 00:15:59.093 }, 00:15:59.093 "memory_domains": [ 00:15:59.093 { 00:15:59.093 "dma_device_id": "system", 00:15:59.093 "dma_device_type": 1 00:15:59.093 }, 00:15:59.093 { 00:15:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.093 "dma_device_type": 2 00:15:59.093 } 00:15:59.093 ], 00:15:59.093 "driver_specific": {} 00:15:59.093 } 00:15:59.093 ] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.093 
17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.093 [2024-10-25 17:57:17.424944] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.093 [2024-10-25 17:57:17.425071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.093 [2024-10-25 17:57:17.425143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.093 [2024-10-25 17:57:17.427034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.093 [2024-10-25 17:57:17.427162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.093 "name": "Existed_Raid", 00:15:59.093 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:15:59.093 "strip_size_kb": 64, 00:15:59.093 "state": "configuring", 00:15:59.093 "raid_level": "raid5f", 00:15:59.093 "superblock": true, 00:15:59.093 "num_base_bdevs": 4, 00:15:59.093 "num_base_bdevs_discovered": 3, 00:15:59.093 "num_base_bdevs_operational": 4, 00:15:59.093 "base_bdevs_list": [ 00:15:59.093 { 00:15:59.093 "name": "BaseBdev1", 00:15:59.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.093 "is_configured": false, 00:15:59.093 "data_offset": 0, 00:15:59.093 "data_size": 0 00:15:59.093 }, 00:15:59.093 { 00:15:59.093 "name": "BaseBdev2", 00:15:59.093 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:15:59.093 "is_configured": true, 00:15:59.093 "data_offset": 2048, 00:15:59.093 "data_size": 63488 00:15:59.093 }, 00:15:59.093 { 00:15:59.093 "name": "BaseBdev3", 
00:15:59.093 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:15:59.093 "is_configured": true, 00:15:59.093 "data_offset": 2048, 00:15:59.093 "data_size": 63488 00:15:59.093 }, 00:15:59.093 { 00:15:59.093 "name": "BaseBdev4", 00:15:59.093 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:15:59.093 "is_configured": true, 00:15:59.093 "data_offset": 2048, 00:15:59.093 "data_size": 63488 00:15:59.093 } 00:15:59.093 ] 00:15:59.093 }' 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.093 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.662 [2024-10-25 17:57:17.900110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.662 
17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.662 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.662 "name": "Existed_Raid", 00:15:59.662 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:15:59.662 "strip_size_kb": 64, 00:15:59.662 "state": "configuring", 00:15:59.663 "raid_level": "raid5f", 00:15:59.663 "superblock": true, 00:15:59.663 "num_base_bdevs": 4, 00:15:59.663 "num_base_bdevs_discovered": 2, 00:15:59.663 "num_base_bdevs_operational": 4, 00:15:59.663 "base_bdevs_list": [ 00:15:59.663 { 00:15:59.663 "name": "BaseBdev1", 00:15:59.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.663 "is_configured": false, 00:15:59.663 "data_offset": 0, 00:15:59.663 "data_size": 0 00:15:59.663 }, 00:15:59.663 { 00:15:59.663 "name": null, 00:15:59.663 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:15:59.663 "is_configured": false, 00:15:59.663 "data_offset": 0, 00:15:59.663 "data_size": 63488 00:15:59.663 }, 00:15:59.663 { 
00:15:59.663 "name": "BaseBdev3", 00:15:59.663 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 }, 00:15:59.663 { 00:15:59.663 "name": "BaseBdev4", 00:15:59.663 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:15:59.663 "is_configured": true, 00:15:59.663 "data_offset": 2048, 00:15:59.663 "data_size": 63488 00:15:59.663 } 00:15:59.663 ] 00:15:59.663 }' 00:15:59.663 17:57:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.663 17:57:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 [2024-10-25 17:57:18.464293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.230 BaseBdev1 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 [ 00:16:00.230 { 00:16:00.230 "name": "BaseBdev1", 00:16:00.230 "aliases": [ 00:16:00.230 "93f5affd-b0b7-47ea-991f-f50fc9fed656" 00:16:00.230 ], 00:16:00.230 "product_name": "Malloc disk", 00:16:00.230 "block_size": 512, 00:16:00.230 "num_blocks": 65536, 00:16:00.230 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:00.230 "assigned_rate_limits": { 00:16:00.230 "rw_ios_per_sec": 0, 00:16:00.230 "rw_mbytes_per_sec": 0, 00:16:00.230 
"r_mbytes_per_sec": 0, 00:16:00.230 "w_mbytes_per_sec": 0 00:16:00.230 }, 00:16:00.230 "claimed": true, 00:16:00.230 "claim_type": "exclusive_write", 00:16:00.230 "zoned": false, 00:16:00.230 "supported_io_types": { 00:16:00.230 "read": true, 00:16:00.230 "write": true, 00:16:00.230 "unmap": true, 00:16:00.230 "flush": true, 00:16:00.230 "reset": true, 00:16:00.230 "nvme_admin": false, 00:16:00.230 "nvme_io": false, 00:16:00.230 "nvme_io_md": false, 00:16:00.230 "write_zeroes": true, 00:16:00.230 "zcopy": true, 00:16:00.230 "get_zone_info": false, 00:16:00.230 "zone_management": false, 00:16:00.230 "zone_append": false, 00:16:00.230 "compare": false, 00:16:00.230 "compare_and_write": false, 00:16:00.230 "abort": true, 00:16:00.230 "seek_hole": false, 00:16:00.230 "seek_data": false, 00:16:00.230 "copy": true, 00:16:00.230 "nvme_iov_md": false 00:16:00.230 }, 00:16:00.230 "memory_domains": [ 00:16:00.230 { 00:16:00.230 "dma_device_id": "system", 00:16:00.230 "dma_device_type": 1 00:16:00.230 }, 00:16:00.230 { 00:16:00.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.230 "dma_device_type": 2 00:16:00.230 } 00:16:00.230 ], 00:16:00.230 "driver_specific": {} 00:16:00.230 } 00:16:00.230 ] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.230 17:57:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.230 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.230 "name": "Existed_Raid", 00:16:00.230 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:00.230 "strip_size_kb": 64, 00:16:00.230 "state": "configuring", 00:16:00.230 "raid_level": "raid5f", 00:16:00.230 "superblock": true, 00:16:00.230 "num_base_bdevs": 4, 00:16:00.230 "num_base_bdevs_discovered": 3, 00:16:00.230 "num_base_bdevs_operational": 4, 00:16:00.230 "base_bdevs_list": [ 00:16:00.230 { 00:16:00.230 "name": "BaseBdev1", 00:16:00.230 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:00.230 "is_configured": true, 00:16:00.230 "data_offset": 2048, 00:16:00.230 "data_size": 63488 00:16:00.230 
}, 00:16:00.230 { 00:16:00.230 "name": null, 00:16:00.230 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:00.230 "is_configured": false, 00:16:00.230 "data_offset": 0, 00:16:00.230 "data_size": 63488 00:16:00.230 }, 00:16:00.230 { 00:16:00.230 "name": "BaseBdev3", 00:16:00.230 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:00.230 "is_configured": true, 00:16:00.230 "data_offset": 2048, 00:16:00.230 "data_size": 63488 00:16:00.230 }, 00:16:00.230 { 00:16:00.231 "name": "BaseBdev4", 00:16:00.231 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:00.231 "is_configured": true, 00:16:00.231 "data_offset": 2048, 00:16:00.231 "data_size": 63488 00:16:00.231 } 00:16:00.231 ] 00:16:00.231 }' 00:16:00.231 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.231 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.489 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.489 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.489 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.489 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.747 
[2024-10-25 17:57:18.967520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.747 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.748 17:57:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:00.748 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.748 "name": "Existed_Raid", 00:16:00.748 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:00.748 "strip_size_kb": 64, 00:16:00.748 "state": "configuring", 00:16:00.748 "raid_level": "raid5f", 00:16:00.748 "superblock": true, 00:16:00.748 "num_base_bdevs": 4, 00:16:00.748 "num_base_bdevs_discovered": 2, 00:16:00.748 "num_base_bdevs_operational": 4, 00:16:00.748 "base_bdevs_list": [ 00:16:00.748 { 00:16:00.748 "name": "BaseBdev1", 00:16:00.748 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:00.748 "is_configured": true, 00:16:00.748 "data_offset": 2048, 00:16:00.748 "data_size": 63488 00:16:00.748 }, 00:16:00.748 { 00:16:00.748 "name": null, 00:16:00.748 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:00.748 "is_configured": false, 00:16:00.748 "data_offset": 0, 00:16:00.748 "data_size": 63488 00:16:00.748 }, 00:16:00.748 { 00:16:00.748 "name": null, 00:16:00.748 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:00.748 "is_configured": false, 00:16:00.748 "data_offset": 0, 00:16:00.748 "data_size": 63488 00:16:00.748 }, 00:16:00.748 { 00:16:00.748 "name": "BaseBdev4", 00:16:00.748 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:00.748 "is_configured": true, 00:16:00.748 "data_offset": 2048, 00:16:00.748 "data_size": 63488 00:16:00.748 } 00:16:00.748 ] 00:16:00.748 }' 00:16:00.748 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.748 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.006 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.006 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.006 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:01.006 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.006 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.264 [2024-10-25 17:57:19.478639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.264 17:57:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.264 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.264 "name": "Existed_Raid", 00:16:01.264 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:01.264 "strip_size_kb": 64, 00:16:01.264 "state": "configuring", 00:16:01.264 "raid_level": "raid5f", 00:16:01.264 "superblock": true, 00:16:01.264 "num_base_bdevs": 4, 00:16:01.264 "num_base_bdevs_discovered": 3, 00:16:01.264 "num_base_bdevs_operational": 4, 00:16:01.264 "base_bdevs_list": [ 00:16:01.264 { 00:16:01.265 "name": "BaseBdev1", 00:16:01.265 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:01.265 "is_configured": true, 00:16:01.265 "data_offset": 2048, 00:16:01.265 "data_size": 63488 00:16:01.265 }, 00:16:01.265 { 00:16:01.265 "name": null, 00:16:01.265 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:01.265 "is_configured": false, 00:16:01.265 "data_offset": 0, 00:16:01.265 "data_size": 63488 00:16:01.265 }, 00:16:01.265 { 00:16:01.265 "name": "BaseBdev3", 00:16:01.265 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:01.265 "is_configured": true, 00:16:01.265 "data_offset": 2048, 00:16:01.265 "data_size": 63488 00:16:01.265 }, 00:16:01.265 { 
00:16:01.265 "name": "BaseBdev4", 00:16:01.265 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:01.265 "is_configured": true, 00:16:01.265 "data_offset": 2048, 00:16:01.265 "data_size": 63488 00:16:01.265 } 00:16:01.265 ] 00:16:01.265 }' 00:16:01.265 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.265 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.831 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.831 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.831 17:57:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.831 17:57:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.831 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.831 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:01.831 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:01.831 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.832 [2024-10-25 17:57:20.037732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.832 "name": "Existed_Raid", 00:16:01.832 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:01.832 "strip_size_kb": 64, 00:16:01.832 "state": "configuring", 00:16:01.832 "raid_level": "raid5f", 00:16:01.832 "superblock": true, 00:16:01.832 "num_base_bdevs": 4, 00:16:01.832 "num_base_bdevs_discovered": 2, 00:16:01.832 
"num_base_bdevs_operational": 4, 00:16:01.832 "base_bdevs_list": [ 00:16:01.832 { 00:16:01.832 "name": null, 00:16:01.832 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:01.832 "is_configured": false, 00:16:01.832 "data_offset": 0, 00:16:01.832 "data_size": 63488 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "name": null, 00:16:01.832 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:01.832 "is_configured": false, 00:16:01.832 "data_offset": 0, 00:16:01.832 "data_size": 63488 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "name": "BaseBdev3", 00:16:01.832 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:01.832 "is_configured": true, 00:16:01.832 "data_offset": 2048, 00:16:01.832 "data_size": 63488 00:16:01.832 }, 00:16:01.832 { 00:16:01.832 "name": "BaseBdev4", 00:16:01.832 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:01.832 "is_configured": true, 00:16:01.832 "data_offset": 2048, 00:16:01.832 "data_size": 63488 00:16:01.832 } 00:16:01.832 ] 00:16:01.832 }' 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.832 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.091 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.091 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.091 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.091 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.349 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.349 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:02.349 17:57:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:02.349 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.349 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.349 [2024-10-25 17:57:20.571143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.350 "name": "Existed_Raid", 00:16:02.350 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:02.350 "strip_size_kb": 64, 00:16:02.350 "state": "configuring", 00:16:02.350 "raid_level": "raid5f", 00:16:02.350 "superblock": true, 00:16:02.350 "num_base_bdevs": 4, 00:16:02.350 "num_base_bdevs_discovered": 3, 00:16:02.350 "num_base_bdevs_operational": 4, 00:16:02.350 "base_bdevs_list": [ 00:16:02.350 { 00:16:02.350 "name": null, 00:16:02.350 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:02.350 "is_configured": false, 00:16:02.350 "data_offset": 0, 00:16:02.350 "data_size": 63488 00:16:02.350 }, 00:16:02.350 { 00:16:02.350 "name": "BaseBdev2", 00:16:02.350 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:02.350 "is_configured": true, 00:16:02.350 "data_offset": 2048, 00:16:02.350 "data_size": 63488 00:16:02.350 }, 00:16:02.350 { 00:16:02.350 "name": "BaseBdev3", 00:16:02.350 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:02.350 "is_configured": true, 00:16:02.350 "data_offset": 2048, 00:16:02.350 "data_size": 63488 00:16:02.350 }, 00:16:02.350 { 00:16:02.350 "name": "BaseBdev4", 00:16:02.350 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:02.350 "is_configured": true, 00:16:02.350 "data_offset": 2048, 00:16:02.350 "data_size": 63488 00:16:02.350 } 00:16:02.350 ] 00:16:02.350 }' 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.350 17:57:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:02.607 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.607 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.607 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.607 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:02.607 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 93f5affd-b0b7-47ea-991f-f50fc9fed656 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 [2024-10-25 17:57:21.158063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:02.866 [2024-10-25 17:57:21.158427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:02.866 [2024-10-25 
17:57:21.158474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.866 [2024-10-25 17:57:21.158746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:02.866 NewBaseBdev 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 [2024-10-25 17:57:21.165800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:02.866 [2024-10-25 17:57:21.165824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:02.866 [2024-10-25 17:57:21.166075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 [ 00:16:02.866 { 00:16:02.866 "name": "NewBaseBdev", 00:16:02.866 "aliases": [ 00:16:02.866 "93f5affd-b0b7-47ea-991f-f50fc9fed656" 00:16:02.866 ], 00:16:02.866 "product_name": "Malloc disk", 00:16:02.866 "block_size": 512, 00:16:02.866 "num_blocks": 65536, 00:16:02.866 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:02.866 "assigned_rate_limits": { 00:16:02.866 "rw_ios_per_sec": 0, 00:16:02.866 "rw_mbytes_per_sec": 0, 00:16:02.866 "r_mbytes_per_sec": 0, 00:16:02.866 "w_mbytes_per_sec": 0 00:16:02.866 }, 00:16:02.866 "claimed": true, 00:16:02.866 "claim_type": "exclusive_write", 00:16:02.866 "zoned": false, 00:16:02.866 "supported_io_types": { 00:16:02.866 "read": true, 00:16:02.866 "write": true, 00:16:02.866 "unmap": true, 00:16:02.866 "flush": true, 00:16:02.866 "reset": true, 00:16:02.866 "nvme_admin": false, 00:16:02.866 "nvme_io": false, 00:16:02.866 "nvme_io_md": false, 00:16:02.866 "write_zeroes": true, 00:16:02.866 "zcopy": true, 00:16:02.866 "get_zone_info": false, 00:16:02.866 "zone_management": false, 00:16:02.866 "zone_append": false, 00:16:02.866 "compare": false, 00:16:02.866 "compare_and_write": false, 00:16:02.866 "abort": true, 00:16:02.866 "seek_hole": false, 00:16:02.866 "seek_data": false, 00:16:02.866 "copy": true, 00:16:02.866 "nvme_iov_md": false 00:16:02.866 }, 00:16:02.866 "memory_domains": [ 00:16:02.866 { 00:16:02.866 "dma_device_id": "system", 00:16:02.866 "dma_device_type": 1 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.866 "dma_device_type": 2 00:16:02.866 } 00:16:02.866 ], 00:16:02.866 "driver_specific": {} 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 17:57:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.866 "name": "Existed_Raid", 00:16:02.866 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:02.866 "strip_size_kb": 64, 00:16:02.866 "state": "online", 00:16:02.866 "raid_level": "raid5f", 00:16:02.866 "superblock": true, 00:16:02.866 "num_base_bdevs": 4, 00:16:02.866 "num_base_bdevs_discovered": 4, 00:16:02.866 "num_base_bdevs_operational": 4, 00:16:02.866 "base_bdevs_list": [ 00:16:02.866 { 00:16:02.866 "name": "NewBaseBdev", 00:16:02.866 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:02.866 "is_configured": true, 00:16:02.866 "data_offset": 2048, 00:16:02.866 "data_size": 63488 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "name": "BaseBdev2", 00:16:02.866 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:02.866 "is_configured": true, 00:16:02.866 "data_offset": 2048, 00:16:02.866 "data_size": 63488 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "name": "BaseBdev3", 00:16:02.866 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:02.866 "is_configured": true, 00:16:02.866 "data_offset": 2048, 00:16:02.866 "data_size": 63488 00:16:02.866 }, 00:16:02.866 { 00:16:02.866 "name": "BaseBdev4", 00:16:02.866 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:02.866 "is_configured": true, 00:16:02.866 "data_offset": 2048, 00:16:02.866 "data_size": 63488 00:16:02.866 } 00:16:02.866 ] 00:16:02.866 }' 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.866 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.434 [2024-10-25 17:57:21.613712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.434 "name": "Existed_Raid", 00:16:03.434 "aliases": [ 00:16:03.434 "43000d7d-7751-4eb2-806d-f38342a091d7" 00:16:03.434 ], 00:16:03.434 "product_name": "Raid Volume", 00:16:03.434 "block_size": 512, 00:16:03.434 "num_blocks": 190464, 00:16:03.434 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:03.434 "assigned_rate_limits": { 00:16:03.434 "rw_ios_per_sec": 0, 00:16:03.434 "rw_mbytes_per_sec": 0, 00:16:03.434 "r_mbytes_per_sec": 0, 00:16:03.434 "w_mbytes_per_sec": 0 00:16:03.434 }, 00:16:03.434 "claimed": false, 00:16:03.434 "zoned": false, 00:16:03.434 "supported_io_types": { 00:16:03.434 "read": true, 00:16:03.434 "write": true, 00:16:03.434 "unmap": false, 00:16:03.434 "flush": false, 00:16:03.434 "reset": true, 00:16:03.434 "nvme_admin": false, 00:16:03.434 "nvme_io": false, 
00:16:03.434 "nvme_io_md": false, 00:16:03.434 "write_zeroes": true, 00:16:03.434 "zcopy": false, 00:16:03.434 "get_zone_info": false, 00:16:03.434 "zone_management": false, 00:16:03.434 "zone_append": false, 00:16:03.434 "compare": false, 00:16:03.434 "compare_and_write": false, 00:16:03.434 "abort": false, 00:16:03.434 "seek_hole": false, 00:16:03.434 "seek_data": false, 00:16:03.434 "copy": false, 00:16:03.434 "nvme_iov_md": false 00:16:03.434 }, 00:16:03.434 "driver_specific": { 00:16:03.434 "raid": { 00:16:03.434 "uuid": "43000d7d-7751-4eb2-806d-f38342a091d7", 00:16:03.434 "strip_size_kb": 64, 00:16:03.434 "state": "online", 00:16:03.434 "raid_level": "raid5f", 00:16:03.434 "superblock": true, 00:16:03.434 "num_base_bdevs": 4, 00:16:03.434 "num_base_bdevs_discovered": 4, 00:16:03.434 "num_base_bdevs_operational": 4, 00:16:03.434 "base_bdevs_list": [ 00:16:03.434 { 00:16:03.434 "name": "NewBaseBdev", 00:16:03.434 "uuid": "93f5affd-b0b7-47ea-991f-f50fc9fed656", 00:16:03.434 "is_configured": true, 00:16:03.434 "data_offset": 2048, 00:16:03.434 "data_size": 63488 00:16:03.434 }, 00:16:03.434 { 00:16:03.434 "name": "BaseBdev2", 00:16:03.434 "uuid": "4e5778ee-3fac-48f1-8187-f43788673c81", 00:16:03.434 "is_configured": true, 00:16:03.434 "data_offset": 2048, 00:16:03.434 "data_size": 63488 00:16:03.434 }, 00:16:03.434 { 00:16:03.434 "name": "BaseBdev3", 00:16:03.434 "uuid": "9361c21b-2eb7-4e63-a9f1-da595e6c98b9", 00:16:03.434 "is_configured": true, 00:16:03.434 "data_offset": 2048, 00:16:03.434 "data_size": 63488 00:16:03.434 }, 00:16:03.434 { 00:16:03.434 "name": "BaseBdev4", 00:16:03.434 "uuid": "d50c9070-e5f0-43ca-9a59-57b467fc9b0b", 00:16:03.434 "is_configured": true, 00:16:03.434 "data_offset": 2048, 00:16:03.434 "data_size": 63488 00:16:03.434 } 00:16:03.434 ] 00:16:03.434 } 00:16:03.434 } 00:16:03.434 }' 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:03.434 BaseBdev2 00:16:03.434 BaseBdev3 00:16:03.434 BaseBdev4' 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.434 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.435 17:57:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.435 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.693 [2024-10-25 17:57:21.944939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.693 [2024-10-25 17:57:21.945029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.693 [2024-10-25 17:57:21.945146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.693 [2024-10-25 17:57:21.945486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.693 [2024-10-25 17:57:21.945498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83346 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83346 ']' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83346 00:16:03.693 17:57:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83346 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83346' 00:16:03.693 killing process with pid 83346 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83346 00:16:03.693 [2024-10-25 17:57:21.995216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.693 17:57:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83346 00:16:04.260 [2024-10-25 17:57:22.388647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.196 ************************************ 00:16:05.196 END TEST raid5f_state_function_test_sb 00:16:05.196 17:57:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:05.196 00:16:05.196 real 0m11.839s 00:16:05.196 user 0m18.811s 00:16:05.196 sys 0m2.220s 00:16:05.196 17:57:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.196 17:57:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.196 ************************************ 00:16:05.196 17:57:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:05.196 17:57:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:05.196 
17:57:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.196 17:57:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.196 ************************************ 00:16:05.196 START TEST raid5f_superblock_test 00:16:05.196 ************************************ 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84021 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84021 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84021 ']' 00:16:05.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.196 17:57:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 [2024-10-25 17:57:23.667220] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:16:05.514 [2024-10-25 17:57:23.667414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84021 ] 00:16:05.514 [2024-10-25 17:57:23.839225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.772 [2024-10-25 17:57:23.958302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.772 [2024-10-25 17:57:24.165439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.772 [2024-10-25 17:57:24.165578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 malloc1 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 [2024-10-25 17:57:24.600387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.339 [2024-10-25 17:57:24.600457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.339 [2024-10-25 17:57:24.600483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:06.339 [2024-10-25 17:57:24.600493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.339 [2024-10-25 17:57:24.602607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.339 [2024-10-25 17:57:24.602644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:06.339 pt1 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
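The trace above is one pass of the base-bdev setup loop: each iteration creates a malloc bdev (`bdev_malloc_create 32 512`) and wraps it in a passthru bdev with a deterministic UUID. Condensed, the four iterations amount to the sketch below; `rpc_cmd` is stubbed here so the sketch runs stand-alone, whereas in the test it wraps `scripts/rpc.py` against `/var/tmp/spdk.sock`:

```shell
# Stub standing in for the harness rpc_cmd helper; here it only echoes
# the RPC it would have issued.
rpc_cmd() { echo "rpc: $*"; }

for i in 1 2 3 4; do
  # Malloc bdev plus passthru wrapper with a fixed per-index UUID,
  # matching the pt1..pt4 devices created in the trace above.
  rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
  rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
done
```

The passthru layer exists so the RAID superblock is written through `pt1`..`pt4` while the backing `malloc1`..`malloc4` bdevs can later be reused to exercise the stale-superblock error path.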
00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 malloc2 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 [2024-10-25 17:57:24.656112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:06.339 [2024-10-25 17:57:24.656210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.339 [2024-10-25 17:57:24.656265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:06.339 [2024-10-25 17:57:24.656298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.339 [2024-10-25 17:57:24.658409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.339 [2024-10-25 17:57:24.658479] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:06.339 pt2 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 malloc3 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 [2024-10-25 17:57:24.725152] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.339 [2024-10-25 17:57:24.725262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.339 [2024-10-25 17:57:24.725302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:06.339 [2024-10-25 17:57:24.725335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.339 [2024-10-25 17:57:24.727403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.339 [2024-10-25 17:57:24.727473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.339 pt3 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:06.339 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.339 17:57:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.599 malloc4 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.599 [2024-10-25 17:57:24.783012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:06.599 [2024-10-25 17:57:24.783103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.599 [2024-10-25 17:57:24.783154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:06.599 [2024-10-25 17:57:24.783185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.599 [2024-10-25 17:57:24.785273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.599 [2024-10-25 17:57:24.785343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:06.599 pt4 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.599 [2024-10-25 17:57:24.795028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.599 [2024-10-25 17:57:24.796820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.599 [2024-10-25 17:57:24.796896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.599 [2024-10-25 17:57:24.796957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:06.599 [2024-10-25 17:57:24.797157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:06.599 [2024-10-25 17:57:24.797172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:06.599 [2024-10-25 17:57:24.797412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.599 [2024-10-25 17:57:24.804404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:06.599 [2024-10-25 17:57:24.804434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:06.599 [2024-10-25 17:57:24.804633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.599 
17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.599 "name": "raid_bdev1", 00:16:06.599 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:06.599 "strip_size_kb": 64, 00:16:06.599 "state": "online", 00:16:06.599 "raid_level": "raid5f", 00:16:06.599 "superblock": true, 00:16:06.599 "num_base_bdevs": 4, 00:16:06.599 "num_base_bdevs_discovered": 4, 00:16:06.599 "num_base_bdevs_operational": 4, 00:16:06.599 "base_bdevs_list": [ 00:16:06.599 { 00:16:06.599 "name": "pt1", 00:16:06.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 }, 00:16:06.599 { 00:16:06.599 "name": "pt2", 00:16:06.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 
"data_size": 63488 00:16:06.599 }, 00:16:06.599 { 00:16:06.599 "name": "pt3", 00:16:06.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 }, 00:16:06.599 { 00:16:06.599 "name": "pt4", 00:16:06.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.599 "is_configured": true, 00:16:06.599 "data_offset": 2048, 00:16:06.599 "data_size": 63488 00:16:06.599 } 00:16:06.599 ] 00:16:06.599 }' 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.599 17:57:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.858 [2024-10-25 17:57:25.257060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.858 17:57:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.117 "name": "raid_bdev1", 00:16:07.117 "aliases": [ 00:16:07.117 "d6a76db0-c99d-4dac-a8db-9fe581cd1c29" 00:16:07.117 ], 00:16:07.117 "product_name": "Raid Volume", 00:16:07.117 "block_size": 512, 00:16:07.117 "num_blocks": 190464, 00:16:07.117 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:07.117 "assigned_rate_limits": { 00:16:07.117 "rw_ios_per_sec": 0, 00:16:07.117 "rw_mbytes_per_sec": 0, 00:16:07.117 "r_mbytes_per_sec": 0, 00:16:07.117 "w_mbytes_per_sec": 0 00:16:07.117 }, 00:16:07.117 "claimed": false, 00:16:07.117 "zoned": false, 00:16:07.117 "supported_io_types": { 00:16:07.117 "read": true, 00:16:07.117 "write": true, 00:16:07.117 "unmap": false, 00:16:07.117 "flush": false, 00:16:07.117 "reset": true, 00:16:07.117 "nvme_admin": false, 00:16:07.117 "nvme_io": false, 00:16:07.117 "nvme_io_md": false, 00:16:07.117 "write_zeroes": true, 00:16:07.117 "zcopy": false, 00:16:07.117 "get_zone_info": false, 00:16:07.117 "zone_management": false, 00:16:07.117 "zone_append": false, 00:16:07.117 "compare": false, 00:16:07.117 "compare_and_write": false, 00:16:07.117 "abort": false, 00:16:07.117 "seek_hole": false, 00:16:07.117 "seek_data": false, 00:16:07.117 "copy": false, 00:16:07.117 "nvme_iov_md": false 00:16:07.117 }, 00:16:07.117 "driver_specific": { 00:16:07.117 "raid": { 00:16:07.117 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:07.117 "strip_size_kb": 64, 00:16:07.117 "state": "online", 00:16:07.117 "raid_level": "raid5f", 00:16:07.117 "superblock": true, 00:16:07.117 "num_base_bdevs": 4, 00:16:07.117 "num_base_bdevs_discovered": 4, 00:16:07.117 "num_base_bdevs_operational": 4, 00:16:07.117 "base_bdevs_list": [ 00:16:07.117 { 00:16:07.117 "name": "pt1", 00:16:07.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.117 "is_configured": true, 00:16:07.117 "data_offset": 2048, 
00:16:07.117 "data_size": 63488 00:16:07.117 }, 00:16:07.117 { 00:16:07.117 "name": "pt2", 00:16:07.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.117 "is_configured": true, 00:16:07.117 "data_offset": 2048, 00:16:07.117 "data_size": 63488 00:16:07.117 }, 00:16:07.117 { 00:16:07.117 "name": "pt3", 00:16:07.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.117 "is_configured": true, 00:16:07.117 "data_offset": 2048, 00:16:07.117 "data_size": 63488 00:16:07.117 }, 00:16:07.117 { 00:16:07.117 "name": "pt4", 00:16:07.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.117 "is_configured": true, 00:16:07.117 "data_offset": 2048, 00:16:07.117 "data_size": 63488 00:16:07.117 } 00:16:07.117 ] 00:16:07.117 } 00:16:07.117 } 00:16:07.117 }' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:07.117 pt2 00:16:07.117 pt3 00:16:07.117 pt4' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.117 17:57:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.117 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.376 17:57:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.376 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.376 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.376 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 [2024-10-25 17:57:25.612594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d6a76db0-c99d-4dac-a8db-9fe581cd1c29 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
d6a76db0-c99d-4dac-a8db-9fe581cd1c29 ']' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 [2024-10-25 17:57:25.660280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.377 [2024-10-25 17:57:25.660362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.377 [2024-10-25 17:57:25.660499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.377 [2024-10-25 17:57:25.660607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.377 [2024-10-25 17:57:25.660626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.377 
17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.377 17:57:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.377 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.637 [2024-10-25 17:57:25.828048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:07.637 [2024-10-25 17:57:25.830008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:07.637 [2024-10-25 17:57:25.830102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:07.637 [2024-10-25 17:57:25.830154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:07.637 [2024-10-25 17:57:25.830249] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:07.637 [2024-10-25 17:57:25.830350] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:07.637 [2024-10-25 17:57:25.830406] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:07.637 [2024-10-25 17:57:25.830471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:07.637 [2024-10-25 17:57:25.830521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.637 [2024-10-25 17:57:25.830548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:07.637 request: 00:16:07.637 { 00:16:07.637 "name": "raid_bdev1", 00:16:07.637 "raid_level": "raid5f", 00:16:07.637 "base_bdevs": [ 00:16:07.637 "malloc1", 00:16:07.637 "malloc2", 00:16:07.637 "malloc3", 00:16:07.637 "malloc4" 00:16:07.637 ], 00:16:07.637 "strip_size_kb": 64, 00:16:07.637 "superblock": false, 00:16:07.637 "method": "bdev_raid_create", 00:16:07.637 "req_id": 1 00:16:07.637 } 00:16:07.637 Got JSON-RPC error response 
00:16:07.637 response: 00:16:07.637 { 00:16:07.637 "code": -17, 00:16:07.637 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:07.637 } 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.637 [2024-10-25 17:57:25.899893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.637 [2024-10-25 17:57:25.900062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:07.637 [2024-10-25 17:57:25.900090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:07.637 [2024-10-25 17:57:25.900103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.637 [2024-10-25 17:57:25.902656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.637 [2024-10-25 17:57:25.902709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.637 [2024-10-25 17:57:25.902813] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.637 [2024-10-25 17:57:25.902898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.637 pt1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.637 "name": "raid_bdev1", 00:16:07.637 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:07.637 "strip_size_kb": 64, 00:16:07.637 "state": "configuring", 00:16:07.637 "raid_level": "raid5f", 00:16:07.637 "superblock": true, 00:16:07.637 "num_base_bdevs": 4, 00:16:07.637 "num_base_bdevs_discovered": 1, 00:16:07.637 "num_base_bdevs_operational": 4, 00:16:07.637 "base_bdevs_list": [ 00:16:07.637 { 00:16:07.637 "name": "pt1", 00:16:07.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.637 "is_configured": true, 00:16:07.637 "data_offset": 2048, 00:16:07.637 "data_size": 63488 00:16:07.637 }, 00:16:07.637 { 00:16:07.637 "name": null, 00:16:07.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.637 "is_configured": false, 00:16:07.637 "data_offset": 2048, 00:16:07.637 "data_size": 63488 00:16:07.637 }, 00:16:07.637 { 00:16:07.637 "name": null, 00:16:07.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.637 "is_configured": false, 00:16:07.637 "data_offset": 2048, 00:16:07.637 "data_size": 63488 00:16:07.637 }, 00:16:07.637 { 00:16:07.637 "name": null, 00:16:07.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.637 "is_configured": false, 00:16:07.637 "data_offset": 2048, 00:16:07.637 "data_size": 63488 00:16:07.637 } 00:16:07.637 ] 00:16:07.637 }' 
00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.637 17:57:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.896 [2024-10-25 17:57:26.323125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.896 [2024-10-25 17:57:26.323244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.896 [2024-10-25 17:57:26.323285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:07.896 [2024-10-25 17:57:26.323316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.896 [2024-10-25 17:57:26.323828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.896 [2024-10-25 17:57:26.323914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.896 [2024-10-25 17:57:26.324039] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:07.896 [2024-10-25 17:57:26.324098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.896 pt2 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:07.896 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.896 [2024-10-25 17:57:26.331102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.154 "name": "raid_bdev1", 00:16:08.154 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:08.154 "strip_size_kb": 64, 00:16:08.154 "state": "configuring", 00:16:08.154 "raid_level": "raid5f", 00:16:08.154 "superblock": true, 00:16:08.154 "num_base_bdevs": 4, 00:16:08.154 "num_base_bdevs_discovered": 1, 00:16:08.154 "num_base_bdevs_operational": 4, 00:16:08.154 "base_bdevs_list": [ 00:16:08.154 { 00:16:08.154 "name": "pt1", 00:16:08.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.154 "is_configured": true, 00:16:08.154 "data_offset": 2048, 00:16:08.154 "data_size": 63488 00:16:08.154 }, 00:16:08.154 { 00:16:08.154 "name": null, 00:16:08.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.154 "is_configured": false, 00:16:08.154 "data_offset": 0, 00:16:08.154 "data_size": 63488 00:16:08.154 }, 00:16:08.154 { 00:16:08.154 "name": null, 00:16:08.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.154 "is_configured": false, 00:16:08.154 "data_offset": 2048, 00:16:08.154 "data_size": 63488 00:16:08.154 }, 00:16:08.154 { 00:16:08.154 "name": null, 00:16:08.154 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.154 "is_configured": false, 00:16:08.154 "data_offset": 2048, 00:16:08.154 "data_size": 63488 00:16:08.154 } 00:16:08.154 ] 00:16:08.154 }' 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.154 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.414 [2024-10-25 17:57:26.762390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.414 [2024-10-25 17:57:26.762525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.414 [2024-10-25 17:57:26.762567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:08.414 [2024-10-25 17:57:26.762598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.414 [2024-10-25 17:57:26.763127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.414 [2024-10-25 17:57:26.763191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.414 [2024-10-25 17:57:26.763312] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.414 [2024-10-25 17:57:26.763341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.414 pt2 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.414 [2024-10-25 17:57:26.774329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:08.414 [2024-10-25 17:57:26.774378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.414 [2024-10-25 17:57:26.774397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:08.414 [2024-10-25 17:57:26.774405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.414 [2024-10-25 17:57:26.774789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.414 [2024-10-25 17:57:26.774805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:08.414 [2024-10-25 17:57:26.774890] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:08.414 [2024-10-25 17:57:26.774909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:08.414 pt3 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.414 [2024-10-25 17:57:26.786280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:08.414 [2024-10-25 17:57:26.786327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.414 [2024-10-25 17:57:26.786362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:08.414 [2024-10-25 17:57:26.786369] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.414 [2024-10-25 17:57:26.786745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.414 [2024-10-25 17:57:26.786761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:08.414 [2024-10-25 17:57:26.786820] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:08.414 [2024-10-25 17:57:26.786837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:08.414 [2024-10-25 17:57:26.787010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:08.414 [2024-10-25 17:57:26.787019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:08.414 [2024-10-25 17:57:26.787258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:08.414 [2024-10-25 17:57:26.794577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:08.414 [2024-10-25 17:57:26.794598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:08.414 [2024-10-25 17:57:26.794771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.414 pt4 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.414 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.673 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.673 "name": "raid_bdev1", 00:16:08.673 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:08.673 "strip_size_kb": 64, 00:16:08.673 "state": "online", 00:16:08.673 "raid_level": "raid5f", 00:16:08.673 "superblock": true, 00:16:08.673 "num_base_bdevs": 4, 00:16:08.673 "num_base_bdevs_discovered": 4, 00:16:08.673 "num_base_bdevs_operational": 4, 00:16:08.673 "base_bdevs_list": [ 00:16:08.673 { 00:16:08.673 "name": "pt1", 00:16:08.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.673 "is_configured": true, 00:16:08.673 
"data_offset": 2048, 00:16:08.673 "data_size": 63488 00:16:08.673 }, 00:16:08.673 { 00:16:08.673 "name": "pt2", 00:16:08.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.673 "is_configured": true, 00:16:08.673 "data_offset": 2048, 00:16:08.673 "data_size": 63488 00:16:08.673 }, 00:16:08.673 { 00:16:08.673 "name": "pt3", 00:16:08.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.673 "is_configured": true, 00:16:08.673 "data_offset": 2048, 00:16:08.673 "data_size": 63488 00:16:08.673 }, 00:16:08.673 { 00:16:08.673 "name": "pt4", 00:16:08.673 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.673 "is_configured": true, 00:16:08.673 "data_offset": 2048, 00:16:08.673 "data_size": 63488 00:16:08.673 } 00:16:08.673 ] 00:16:08.673 }' 00:16:08.673 17:57:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.673 17:57:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.932 17:57:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.932 [2024-10-25 17:57:27.255472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.932 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.932 "name": "raid_bdev1", 00:16:08.932 "aliases": [ 00:16:08.932 "d6a76db0-c99d-4dac-a8db-9fe581cd1c29" 00:16:08.932 ], 00:16:08.932 "product_name": "Raid Volume", 00:16:08.932 "block_size": 512, 00:16:08.932 "num_blocks": 190464, 00:16:08.932 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:08.932 "assigned_rate_limits": { 00:16:08.932 "rw_ios_per_sec": 0, 00:16:08.932 "rw_mbytes_per_sec": 0, 00:16:08.932 "r_mbytes_per_sec": 0, 00:16:08.932 "w_mbytes_per_sec": 0 00:16:08.932 }, 00:16:08.932 "claimed": false, 00:16:08.932 "zoned": false, 00:16:08.932 "supported_io_types": { 00:16:08.932 "read": true, 00:16:08.932 "write": true, 00:16:08.932 "unmap": false, 00:16:08.932 "flush": false, 00:16:08.932 "reset": true, 00:16:08.932 "nvme_admin": false, 00:16:08.932 "nvme_io": false, 00:16:08.932 "nvme_io_md": false, 00:16:08.932 "write_zeroes": true, 00:16:08.932 "zcopy": false, 00:16:08.932 "get_zone_info": false, 00:16:08.932 "zone_management": false, 00:16:08.932 "zone_append": false, 00:16:08.932 "compare": false, 00:16:08.932 "compare_and_write": false, 00:16:08.932 "abort": false, 00:16:08.932 "seek_hole": false, 00:16:08.932 "seek_data": false, 00:16:08.932 "copy": false, 00:16:08.932 "nvme_iov_md": false 00:16:08.932 }, 00:16:08.932 "driver_specific": { 00:16:08.932 "raid": { 00:16:08.932 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:08.932 "strip_size_kb": 64, 00:16:08.932 "state": "online", 00:16:08.932 "raid_level": "raid5f", 00:16:08.932 "superblock": true, 00:16:08.932 "num_base_bdevs": 4, 00:16:08.932 "num_base_bdevs_discovered": 4, 
00:16:08.932 "num_base_bdevs_operational": 4, 00:16:08.932 "base_bdevs_list": [ 00:16:08.932 { 00:16:08.932 "name": "pt1", 00:16:08.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.932 "is_configured": true, 00:16:08.933 "data_offset": 2048, 00:16:08.933 "data_size": 63488 00:16:08.933 }, 00:16:08.933 { 00:16:08.933 "name": "pt2", 00:16:08.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.933 "is_configured": true, 00:16:08.933 "data_offset": 2048, 00:16:08.933 "data_size": 63488 00:16:08.933 }, 00:16:08.933 { 00:16:08.933 "name": "pt3", 00:16:08.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.933 "is_configured": true, 00:16:08.933 "data_offset": 2048, 00:16:08.933 "data_size": 63488 00:16:08.933 }, 00:16:08.933 { 00:16:08.933 "name": "pt4", 00:16:08.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.933 "is_configured": true, 00:16:08.933 "data_offset": 2048, 00:16:08.933 "data_size": 63488 00:16:08.933 } 00:16:08.933 ] 00:16:08.933 } 00:16:08.933 } 00:16:08.933 }' 00:16:08.933 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.933 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:08.933 pt2 00:16:08.933 pt3 00:16:08.933 pt4' 00:16:08.933 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.191 17:57:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.191 
17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.191 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.192 [2024-10-25 17:57:27.590945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.192 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d6a76db0-c99d-4dac-a8db-9fe581cd1c29 '!=' d6a76db0-c99d-4dac-a8db-9fe581cd1c29 ']' 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.450 [2024-10-25 17:57:27.634695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.450 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.450 "name": "raid_bdev1", 00:16:09.450 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:09.450 "strip_size_kb": 64, 00:16:09.450 "state": "online", 00:16:09.450 "raid_level": "raid5f", 00:16:09.450 "superblock": true, 00:16:09.450 "num_base_bdevs": 4, 00:16:09.450 "num_base_bdevs_discovered": 3, 00:16:09.450 "num_base_bdevs_operational": 3, 00:16:09.450 "base_bdevs_list": [ 00:16:09.450 { 00:16:09.450 "name": null, 00:16:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.450 "is_configured": false, 00:16:09.450 "data_offset": 0, 00:16:09.450 "data_size": 63488 00:16:09.450 }, 00:16:09.450 { 00:16:09.450 "name": "pt2", 00:16:09.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.450 "is_configured": true, 00:16:09.450 "data_offset": 2048, 00:16:09.450 "data_size": 63488 00:16:09.450 }, 00:16:09.450 { 00:16:09.450 "name": "pt3", 00:16:09.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.451 "is_configured": true, 00:16:09.451 "data_offset": 2048, 00:16:09.451 "data_size": 63488 00:16:09.451 }, 00:16:09.451 { 00:16:09.451 "name": "pt4", 00:16:09.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.451 "is_configured": true, 00:16:09.451 
"data_offset": 2048, 00:16:09.451 "data_size": 63488 00:16:09.451 } 00:16:09.451 ] 00:16:09.451 }' 00:16:09.451 17:57:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.451 17:57:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 [2024-10-25 17:57:28.057918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.709 [2024-10-25 17:57:28.057954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.709 [2024-10-25 17:57:28.058051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.709 [2024-10-25 17:57:28.058138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.709 [2024-10-25 17:57:28.058149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.709 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 [2024-10-25 17:57:28.153766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.967 [2024-10-25 17:57:28.153925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.967 [2024-10-25 17:57:28.153972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:09.967 [2024-10-25 17:57:28.154019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.967 [2024-10-25 17:57:28.156466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.967 [2024-10-25 17:57:28.156549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.967 [2024-10-25 17:57:28.156699] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:09.967 [2024-10-25 17:57:28.156784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.967 pt2 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.967 "name": "raid_bdev1", 00:16:09.967 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:09.967 "strip_size_kb": 64, 00:16:09.967 "state": "configuring", 00:16:09.967 "raid_level": "raid5f", 00:16:09.967 "superblock": true, 00:16:09.967 
"num_base_bdevs": 4, 00:16:09.967 "num_base_bdevs_discovered": 1, 00:16:09.967 "num_base_bdevs_operational": 3, 00:16:09.967 "base_bdevs_list": [ 00:16:09.967 { 00:16:09.967 "name": null, 00:16:09.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.967 "is_configured": false, 00:16:09.967 "data_offset": 2048, 00:16:09.967 "data_size": 63488 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": "pt2", 00:16:09.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.967 "is_configured": true, 00:16:09.967 "data_offset": 2048, 00:16:09.967 "data_size": 63488 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": null, 00:16:09.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.967 "is_configured": false, 00:16:09.967 "data_offset": 2048, 00:16:09.967 "data_size": 63488 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": null, 00:16:09.967 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.967 "is_configured": false, 00:16:09.967 "data_offset": 2048, 00:16:09.967 "data_size": 63488 00:16:09.967 } 00:16:09.967 ] 00:16:09.967 }' 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.967 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.227 [2024-10-25 17:57:28.569052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:10.227 [2024-10-25 
17:57:28.569192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.227 [2024-10-25 17:57:28.569224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:10.227 [2024-10-25 17:57:28.569236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.227 [2024-10-25 17:57:28.569732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.227 [2024-10-25 17:57:28.569758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:10.227 [2024-10-25 17:57:28.569866] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:10.227 [2024-10-25 17:57:28.569899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:10.227 pt3 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.227 "name": "raid_bdev1", 00:16:10.227 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:10.227 "strip_size_kb": 64, 00:16:10.227 "state": "configuring", 00:16:10.227 "raid_level": "raid5f", 00:16:10.227 "superblock": true, 00:16:10.227 "num_base_bdevs": 4, 00:16:10.227 "num_base_bdevs_discovered": 2, 00:16:10.227 "num_base_bdevs_operational": 3, 00:16:10.227 "base_bdevs_list": [ 00:16:10.227 { 00:16:10.227 "name": null, 00:16:10.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.227 "is_configured": false, 00:16:10.227 "data_offset": 2048, 00:16:10.227 "data_size": 63488 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "pt2", 00:16:10.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 2048, 00:16:10.227 "data_size": 63488 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": "pt3", 00:16:10.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.227 "is_configured": true, 00:16:10.227 "data_offset": 2048, 00:16:10.227 "data_size": 63488 00:16:10.227 }, 00:16:10.227 { 00:16:10.227 "name": null, 00:16:10.227 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.227 "is_configured": false, 00:16:10.227 "data_offset": 2048, 
00:16:10.227 "data_size": 63488 00:16:10.227 } 00:16:10.227 ] 00:16:10.227 }' 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.227 17:57:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 [2024-10-25 17:57:29.020316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:10.796 [2024-10-25 17:57:29.020437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.796 [2024-10-25 17:57:29.020480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:10.796 [2024-10-25 17:57:29.020508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.796 [2024-10-25 17:57:29.021061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.796 [2024-10-25 17:57:29.021132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:10.796 [2024-10-25 17:57:29.021273] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:10.796 [2024-10-25 17:57:29.021331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:10.796 [2024-10-25 17:57:29.021501] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:10.796 [2024-10-25 17:57:29.021541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:10.796 [2024-10-25 17:57:29.021878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:10.796 [2024-10-25 17:57:29.029219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:10.796 [2024-10-25 17:57:29.029281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:10.796 [2024-10-25 17:57:29.029633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.796 pt4 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.796 
17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.796 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.796 "name": "raid_bdev1", 00:16:10.796 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:10.796 "strip_size_kb": 64, 00:16:10.797 "state": "online", 00:16:10.797 "raid_level": "raid5f", 00:16:10.797 "superblock": true, 00:16:10.797 "num_base_bdevs": 4, 00:16:10.797 "num_base_bdevs_discovered": 3, 00:16:10.797 "num_base_bdevs_operational": 3, 00:16:10.797 "base_bdevs_list": [ 00:16:10.797 { 00:16:10.797 "name": null, 00:16:10.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.797 "is_configured": false, 00:16:10.797 "data_offset": 2048, 00:16:10.797 "data_size": 63488 00:16:10.797 }, 00:16:10.797 { 00:16:10.797 "name": "pt2", 00:16:10.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.797 "is_configured": true, 00:16:10.797 "data_offset": 2048, 00:16:10.797 "data_size": 63488 00:16:10.797 }, 00:16:10.797 { 00:16:10.797 "name": "pt3", 00:16:10.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.797 "is_configured": true, 00:16:10.797 "data_offset": 2048, 00:16:10.797 "data_size": 63488 00:16:10.797 }, 00:16:10.797 { 00:16:10.797 "name": "pt4", 00:16:10.797 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.797 "is_configured": true, 00:16:10.797 "data_offset": 2048, 00:16:10.797 "data_size": 63488 00:16:10.797 } 00:16:10.797 ] 00:16:10.797 }' 00:16:10.797 17:57:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.797 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.056 [2024-10-25 17:57:29.470686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.056 [2024-10-25 17:57:29.470776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.056 [2024-10-25 17:57:29.470905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.056 [2024-10-25 17:57:29.471017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.056 [2024-10-25 17:57:29.471034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.056 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.362 [2024-10-25 17:57:29.534561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:11.362 [2024-10-25 17:57:29.534694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.362 [2024-10-25 17:57:29.534746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:11.362 [2024-10-25 17:57:29.534807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.362 [2024-10-25 17:57:29.537441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.362 [2024-10-25 17:57:29.537536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:11.362 [2024-10-25 17:57:29.537691] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:11.362 [2024-10-25 17:57:29.537798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.362 
[2024-10-25 17:57:29.538011] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:11.362 [2024-10-25 17:57:29.538082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.362 [2024-10-25 17:57:29.538136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:11.362 [2024-10-25 17:57:29.538271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.362 [2024-10-25 17:57:29.538447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:11.362 pt1 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.362 "name": "raid_bdev1", 00:16:11.362 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:11.362 "strip_size_kb": 64, 00:16:11.362 "state": "configuring", 00:16:11.362 "raid_level": "raid5f", 00:16:11.362 "superblock": true, 00:16:11.362 "num_base_bdevs": 4, 00:16:11.362 "num_base_bdevs_discovered": 2, 00:16:11.362 "num_base_bdevs_operational": 3, 00:16:11.362 "base_bdevs_list": [ 00:16:11.362 { 00:16:11.362 "name": null, 00:16:11.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.362 "is_configured": false, 00:16:11.362 "data_offset": 2048, 00:16:11.362 "data_size": 63488 00:16:11.362 }, 00:16:11.362 { 00:16:11.362 "name": "pt2", 00:16:11.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.362 "is_configured": true, 00:16:11.362 "data_offset": 2048, 00:16:11.362 "data_size": 63488 00:16:11.362 }, 00:16:11.362 { 00:16:11.362 "name": "pt3", 00:16:11.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.362 "is_configured": true, 00:16:11.362 "data_offset": 2048, 00:16:11.362 "data_size": 63488 00:16:11.362 }, 00:16:11.362 { 00:16:11.362 "name": null, 00:16:11.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:11.362 "is_configured": false, 00:16:11.362 "data_offset": 2048, 00:16:11.362 "data_size": 63488 00:16:11.362 } 00:16:11.362 ] 
00:16:11.362 }' 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.362 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.632 17:57:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 [2024-10-25 17:57:30.009939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:11.632 [2024-10-25 17:57:30.010017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.632 [2024-10-25 17:57:30.010050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:11.632 [2024-10-25 17:57:30.010062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.632 [2024-10-25 17:57:30.010601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.632 [2024-10-25 17:57:30.010621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:11.632 [2024-10-25 17:57:30.010722] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:11.632 [2024-10-25 17:57:30.010758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:11.632 [2024-10-25 17:57:30.010972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:11.632 [2024-10-25 17:57:30.010984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.632 [2024-10-25 17:57:30.011300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:11.632 [2024-10-25 17:57:30.020662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:11.632 [2024-10-25 17:57:30.020694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:11.632 [2024-10-25 17:57:30.021046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.632 pt4 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.632 17:57:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.890 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.890 "name": "raid_bdev1", 00:16:11.890 "uuid": "d6a76db0-c99d-4dac-a8db-9fe581cd1c29", 00:16:11.890 "strip_size_kb": 64, 00:16:11.890 "state": "online", 00:16:11.890 "raid_level": "raid5f", 00:16:11.890 "superblock": true, 00:16:11.890 "num_base_bdevs": 4, 00:16:11.890 "num_base_bdevs_discovered": 3, 00:16:11.890 "num_base_bdevs_operational": 3, 00:16:11.890 "base_bdevs_list": [ 00:16:11.890 { 00:16:11.890 "name": null, 00:16:11.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.890 "is_configured": false, 00:16:11.890 "data_offset": 2048, 00:16:11.890 "data_size": 63488 00:16:11.890 }, 00:16:11.890 { 00:16:11.890 "name": "pt2", 00:16:11.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.890 "is_configured": true, 00:16:11.890 "data_offset": 2048, 00:16:11.890 "data_size": 63488 00:16:11.890 }, 00:16:11.890 { 00:16:11.890 "name": "pt3", 00:16:11.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.890 "is_configured": true, 00:16:11.890 "data_offset": 2048, 00:16:11.890 "data_size": 63488 
00:16:11.890 }, 00:16:11.890 { 00:16:11.890 "name": "pt4", 00:16:11.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:11.890 "is_configured": true, 00:16:11.890 "data_offset": 2048, 00:16:11.890 "data_size": 63488 00:16:11.890 } 00:16:11.890 ] 00:16:11.890 }' 00:16:11.890 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.890 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:12.149 [2024-10-25 17:57:30.511572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d6a76db0-c99d-4dac-a8db-9fe581cd1c29 '!=' d6a76db0-c99d-4dac-a8db-9fe581cd1c29 ']' 00:16:12.149 17:57:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84021 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84021 ']' 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84021 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.149 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84021 00:16:12.407 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.407 killing process with pid 84021 00:16:12.407 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.407 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84021' 00:16:12.407 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84021 00:16:12.407 [2024-10-25 17:57:30.599479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.407 [2024-10-25 17:57:30.599591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.407 17:57:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84021 00:16:12.407 [2024-10-25 17:57:30.599679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.407 [2024-10-25 17:57:30.599692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:12.665 [2024-10-25 17:57:31.005402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.040 ************************************ 00:16:14.040 END TEST raid5f_superblock_test 00:16:14.040 
************************************ 00:16:14.040 17:57:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:14.040 00:16:14.040 real 0m8.547s 00:16:14.040 user 0m13.406s 00:16:14.040 sys 0m1.591s 00:16:14.040 17:57:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.040 17:57:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.040 17:57:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:14.040 17:57:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:14.040 17:57:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:14.040 17:57:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.040 17:57:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.040 ************************************ 00:16:14.040 START TEST raid5f_rebuild_test 00:16:14.040 ************************************ 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84505 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84505 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84505 ']' 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.040 17:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.040 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.040 Zero copy mechanism will not be used. 00:16:14.040 [2024-10-25 17:57:32.293500] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:16:14.040 [2024-10-25 17:57:32.293620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84505 ] 00:16:14.040 [2024-10-25 17:57:32.471698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.298 [2024-10-25 17:57:32.583931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.557 [2024-10-25 17:57:32.788038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.557 [2024-10-25 17:57:32.788204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.815 17:57:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.815 BaseBdev1_malloc 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.815 [2024-10-25 17:57:33.221298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:14.815 [2024-10-25 17:57:33.221364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.815 [2024-10-25 17:57:33.221384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:14.815 [2024-10-25 17:57:33.221395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.815 [2024-10-25 17:57:33.223498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.815 [2024-10-25 17:57:33.223576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.815 BaseBdev1 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.815 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 BaseBdev2_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 [2024-10-25 17:57:33.273021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.073 [2024-10-25 17:57:33.273133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.073 [2024-10-25 17:57:33.273157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.073 [2024-10-25 17:57:33.273171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.073 [2024-10-25 17:57:33.275262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.073 [2024-10-25 17:57:33.275299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.073 BaseBdev2 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 BaseBdev3_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 [2024-10-25 17:57:33.337539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:15.073 [2024-10-25 17:57:33.337603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.073 [2024-10-25 17:57:33.337625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:15.073 [2024-10-25 17:57:33.337636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.073 [2024-10-25 17:57:33.339981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.073 [2024-10-25 17:57:33.340023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:15.073 BaseBdev3 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 BaseBdev4_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 [2024-10-25 17:57:33.391449] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:15.073 [2024-10-25 17:57:33.391506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.073 [2024-10-25 17:57:33.391544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.073 [2024-10-25 17:57:33.391555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.073 [2024-10-25 17:57:33.393787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.073 [2024-10-25 17:57:33.393838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:15.073 BaseBdev4 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 spare_malloc 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 spare_delay 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:15.073 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.074 [2024-10-25 17:57:33.448235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.074 [2024-10-25 17:57:33.448294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.074 [2024-10-25 17:57:33.448314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:15.074 [2024-10-25 17:57:33.448325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.074 [2024-10-25 17:57:33.450409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.074 [2024-10-25 17:57:33.450514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.074 spare 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.074 [2024-10-25 17:57:33.456301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.074 [2024-10-25 17:57:33.458137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.074 [2024-10-25 17:57:33.458200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.074 [2024-10-25 17:57:33.458252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.074 [2024-10-25 17:57:33.458342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.074 
[2024-10-25 17:57:33.458354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:15.074 [2024-10-25 17:57:33.458611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:15.074 [2024-10-25 17:57:33.466494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.074 [2024-10-25 17:57:33.466517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.074 [2024-10-25 17:57:33.466752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.074 17:57:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.074 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.332 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.332 "name": "raid_bdev1", 00:16:15.332 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:15.332 "strip_size_kb": 64, 00:16:15.332 "state": "online", 00:16:15.332 "raid_level": "raid5f", 00:16:15.332 "superblock": false, 00:16:15.332 "num_base_bdevs": 4, 00:16:15.332 "num_base_bdevs_discovered": 4, 00:16:15.332 "num_base_bdevs_operational": 4, 00:16:15.332 "base_bdevs_list": [ 00:16:15.332 { 00:16:15.332 "name": "BaseBdev1", 00:16:15.332 "uuid": "a16b744b-044d-5f44-b367-624c47a6e496", 00:16:15.332 "is_configured": true, 00:16:15.332 "data_offset": 0, 00:16:15.332 "data_size": 65536 00:16:15.332 }, 00:16:15.332 { 00:16:15.332 "name": "BaseBdev2", 00:16:15.332 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:15.332 "is_configured": true, 00:16:15.332 "data_offset": 0, 00:16:15.332 "data_size": 65536 00:16:15.332 }, 00:16:15.332 { 00:16:15.332 "name": "BaseBdev3", 00:16:15.332 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:15.332 "is_configured": true, 00:16:15.332 "data_offset": 0, 00:16:15.332 "data_size": 65536 00:16:15.332 }, 00:16:15.332 { 00:16:15.332 "name": "BaseBdev4", 00:16:15.332 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:15.332 "is_configured": true, 00:16:15.332 "data_offset": 0, 00:16:15.332 "data_size": 65536 00:16:15.332 } 00:16:15.332 ] 00:16:15.332 }' 00:16:15.332 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.332 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.590 [2024-10-25 17:57:33.903357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.590 17:57:33 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.590 17:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:15.849 [2024-10-25 17:57:34.186702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:15.849 /dev/nbd0 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.849 1+0 records in 00:16:15.849 1+0 records out 00:16:15.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351563 s, 11.7 MB/s 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:15.849 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:16.418 512+0 records in 00:16:16.418 512+0 records out 00:16:16.418 100663296 bytes (101 MB, 96 MiB) copied, 0.52625 s, 191 MB/s 00:16:16.418 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:16.418 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.418 17:57:34 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.418 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.418 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:16.419 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.419 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.678 17:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.678 [2024-10-25 17:57:35.000221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.678 [2024-10-25 17:57:35.019920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.678 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.678 "name": "raid_bdev1", 00:16:16.678 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:16.679 "strip_size_kb": 64, 00:16:16.679 "state": "online", 00:16:16.679 "raid_level": "raid5f", 00:16:16.679 "superblock": false, 00:16:16.679 
"num_base_bdevs": 4, 00:16:16.679 "num_base_bdevs_discovered": 3, 00:16:16.679 "num_base_bdevs_operational": 3, 00:16:16.679 "base_bdevs_list": [ 00:16:16.679 { 00:16:16.679 "name": null, 00:16:16.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.679 "is_configured": false, 00:16:16.679 "data_offset": 0, 00:16:16.679 "data_size": 65536 00:16:16.679 }, 00:16:16.679 { 00:16:16.679 "name": "BaseBdev2", 00:16:16.679 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:16.679 "is_configured": true, 00:16:16.679 "data_offset": 0, 00:16:16.679 "data_size": 65536 00:16:16.679 }, 00:16:16.679 { 00:16:16.679 "name": "BaseBdev3", 00:16:16.679 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:16.679 "is_configured": true, 00:16:16.679 "data_offset": 0, 00:16:16.679 "data_size": 65536 00:16:16.679 }, 00:16:16.679 { 00:16:16.679 "name": "BaseBdev4", 00:16:16.679 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:16.679 "is_configured": true, 00:16:16.679 "data_offset": 0, 00:16:16.679 "data_size": 65536 00:16:16.679 } 00:16:16.679 ] 00:16:16.679 }' 00:16:16.679 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.679 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.247 17:57:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.247 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.247 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.247 [2024-10-25 17:57:35.487104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.247 [2024-10-25 17:57:35.505138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:17.247 17:57:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.247 17:57:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:16:17.247 [2024-10-25 17:57:35.516979] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.184 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.184 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.184 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.185 "name": "raid_bdev1", 00:16:18.185 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:18.185 "strip_size_kb": 64, 00:16:18.185 "state": "online", 00:16:18.185 "raid_level": "raid5f", 00:16:18.185 "superblock": false, 00:16:18.185 "num_base_bdevs": 4, 00:16:18.185 "num_base_bdevs_discovered": 4, 00:16:18.185 "num_base_bdevs_operational": 4, 00:16:18.185 "process": { 00:16:18.185 "type": "rebuild", 00:16:18.185 "target": "spare", 00:16:18.185 "progress": { 00:16:18.185 "blocks": 17280, 00:16:18.185 "percent": 8 00:16:18.185 } 00:16:18.185 }, 00:16:18.185 "base_bdevs_list": [ 00:16:18.185 { 
00:16:18.185 "name": "spare", 00:16:18.185 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:18.185 "is_configured": true, 00:16:18.185 "data_offset": 0, 00:16:18.185 "data_size": 65536 00:16:18.185 }, 00:16:18.185 { 00:16:18.185 "name": "BaseBdev2", 00:16:18.185 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:18.185 "is_configured": true, 00:16:18.185 "data_offset": 0, 00:16:18.185 "data_size": 65536 00:16:18.185 }, 00:16:18.185 { 00:16:18.185 "name": "BaseBdev3", 00:16:18.185 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:18.185 "is_configured": true, 00:16:18.185 "data_offset": 0, 00:16:18.185 "data_size": 65536 00:16:18.185 }, 00:16:18.185 { 00:16:18.185 "name": "BaseBdev4", 00:16:18.185 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:18.185 "is_configured": true, 00:16:18.185 "data_offset": 0, 00:16:18.185 "data_size": 65536 00:16:18.185 } 00:16:18.185 ] 00:16:18.185 }' 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.185 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.444 [2024-10-25 17:57:36.671919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.444 [2024-10-25 17:57:36.726101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.444 [2024-10-25 17:57:36.726194] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.444 [2024-10-25 17:57:36.726212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.444 [2024-10-25 17:57:36.726222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.444 17:57:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.444 "name": "raid_bdev1", 00:16:18.444 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:18.444 "strip_size_kb": 64, 00:16:18.444 "state": "online", 00:16:18.444 "raid_level": "raid5f", 00:16:18.444 "superblock": false, 00:16:18.444 "num_base_bdevs": 4, 00:16:18.444 "num_base_bdevs_discovered": 3, 00:16:18.444 "num_base_bdevs_operational": 3, 00:16:18.444 "base_bdevs_list": [ 00:16:18.444 { 00:16:18.444 "name": null, 00:16:18.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.444 "is_configured": false, 00:16:18.444 "data_offset": 0, 00:16:18.444 "data_size": 65536 00:16:18.444 }, 00:16:18.444 { 00:16:18.444 "name": "BaseBdev2", 00:16:18.444 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:18.444 "is_configured": true, 00:16:18.444 "data_offset": 0, 00:16:18.444 "data_size": 65536 00:16:18.444 }, 00:16:18.444 { 00:16:18.444 "name": "BaseBdev3", 00:16:18.444 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:18.444 "is_configured": true, 00:16:18.444 "data_offset": 0, 00:16:18.444 "data_size": 65536 00:16:18.444 }, 00:16:18.444 { 00:16:18.444 "name": "BaseBdev4", 00:16:18.444 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:18.444 "is_configured": true, 00:16:18.444 "data_offset": 0, 00:16:18.444 "data_size": 65536 00:16:18.444 } 00:16:18.444 ] 00:16:18.444 }' 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.444 17:57:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.011 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.011 "name": "raid_bdev1", 00:16:19.011 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:19.011 "strip_size_kb": 64, 00:16:19.011 "state": "online", 00:16:19.011 "raid_level": "raid5f", 00:16:19.011 "superblock": false, 00:16:19.011 "num_base_bdevs": 4, 00:16:19.011 "num_base_bdevs_discovered": 3, 00:16:19.011 "num_base_bdevs_operational": 3, 00:16:19.011 "base_bdevs_list": [ 00:16:19.011 { 00:16:19.011 "name": null, 00:16:19.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.011 "is_configured": false, 00:16:19.011 "data_offset": 0, 00:16:19.011 "data_size": 65536 00:16:19.011 }, 00:16:19.011 { 00:16:19.011 "name": "BaseBdev2", 00:16:19.011 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:19.011 "is_configured": true, 00:16:19.011 "data_offset": 0, 00:16:19.011 "data_size": 65536 00:16:19.011 }, 00:16:19.011 { 00:16:19.011 "name": "BaseBdev3", 00:16:19.011 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:19.012 "is_configured": true, 00:16:19.012 "data_offset": 0, 00:16:19.012 "data_size": 65536 00:16:19.012 }, 00:16:19.012 { 00:16:19.012 "name": 
"BaseBdev4", 00:16:19.012 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:19.012 "is_configured": true, 00:16:19.012 "data_offset": 0, 00:16:19.012 "data_size": 65536 00:16:19.012 } 00:16:19.012 ] 00:16:19.012 }' 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.012 [2024-10-25 17:57:37.368336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.012 [2024-10-25 17:57:37.383455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.012 17:57:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:19.012 [2024-10-25 17:57:37.393007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.390 "name": "raid_bdev1", 00:16:20.390 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:20.390 "strip_size_kb": 64, 00:16:20.390 "state": "online", 00:16:20.390 "raid_level": "raid5f", 00:16:20.390 "superblock": false, 00:16:20.390 "num_base_bdevs": 4, 00:16:20.390 "num_base_bdevs_discovered": 4, 00:16:20.390 "num_base_bdevs_operational": 4, 00:16:20.390 "process": { 00:16:20.390 "type": "rebuild", 00:16:20.390 "target": "spare", 00:16:20.390 "progress": { 00:16:20.390 "blocks": 19200, 00:16:20.390 "percent": 9 00:16:20.390 } 00:16:20.390 }, 00:16:20.390 "base_bdevs_list": [ 00:16:20.390 { 00:16:20.390 "name": "spare", 00:16:20.390 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev2", 00:16:20.390 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev3", 00:16:20.390 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:20.390 "is_configured": true, 00:16:20.390 
"data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev4", 00:16:20.390 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 } 00:16:20.390 ] 00:16:20.390 }' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=623 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.390 17:57:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.390 "name": "raid_bdev1", 00:16:20.390 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:20.390 "strip_size_kb": 64, 00:16:20.390 "state": "online", 00:16:20.390 "raid_level": "raid5f", 00:16:20.390 "superblock": false, 00:16:20.390 "num_base_bdevs": 4, 00:16:20.390 "num_base_bdevs_discovered": 4, 00:16:20.390 "num_base_bdevs_operational": 4, 00:16:20.390 "process": { 00:16:20.390 "type": "rebuild", 00:16:20.390 "target": "spare", 00:16:20.390 "progress": { 00:16:20.390 "blocks": 21120, 00:16:20.390 "percent": 10 00:16:20.390 } 00:16:20.390 }, 00:16:20.390 "base_bdevs_list": [ 00:16:20.390 { 00:16:20.390 "name": "spare", 00:16:20.390 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev2", 00:16:20.390 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev3", 00:16:20.390 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 }, 00:16:20.390 { 00:16:20.390 "name": "BaseBdev4", 00:16:20.390 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:20.390 "is_configured": true, 00:16:20.390 "data_offset": 0, 00:16:20.390 "data_size": 65536 00:16:20.390 } 
00:16:20.390 ] 00:16:20.390 }' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.390 17:57:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.327 "name": "raid_bdev1", 00:16:21.327 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:21.327 
"strip_size_kb": 64, 00:16:21.327 "state": "online", 00:16:21.327 "raid_level": "raid5f", 00:16:21.327 "superblock": false, 00:16:21.327 "num_base_bdevs": 4, 00:16:21.327 "num_base_bdevs_discovered": 4, 00:16:21.327 "num_base_bdevs_operational": 4, 00:16:21.327 "process": { 00:16:21.327 "type": "rebuild", 00:16:21.327 "target": "spare", 00:16:21.327 "progress": { 00:16:21.327 "blocks": 42240, 00:16:21.327 "percent": 21 00:16:21.327 } 00:16:21.327 }, 00:16:21.327 "base_bdevs_list": [ 00:16:21.327 { 00:16:21.327 "name": "spare", 00:16:21.327 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:21.327 "is_configured": true, 00:16:21.327 "data_offset": 0, 00:16:21.327 "data_size": 65536 00:16:21.327 }, 00:16:21.327 { 00:16:21.327 "name": "BaseBdev2", 00:16:21.327 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:21.327 "is_configured": true, 00:16:21.327 "data_offset": 0, 00:16:21.327 "data_size": 65536 00:16:21.327 }, 00:16:21.327 { 00:16:21.327 "name": "BaseBdev3", 00:16:21.327 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:21.327 "is_configured": true, 00:16:21.327 "data_offset": 0, 00:16:21.327 "data_size": 65536 00:16:21.327 }, 00:16:21.327 { 00:16:21.327 "name": "BaseBdev4", 00:16:21.327 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:21.327 "is_configured": true, 00:16:21.327 "data_offset": 0, 00:16:21.327 "data_size": 65536 00:16:21.327 } 00:16:21.327 ] 00:16:21.327 }' 00:16:21.327 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.586 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.586 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.586 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.586 17:57:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.521 17:57:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.521 "name": "raid_bdev1", 00:16:22.521 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:22.521 "strip_size_kb": 64, 00:16:22.521 "state": "online", 00:16:22.521 "raid_level": "raid5f", 00:16:22.521 "superblock": false, 00:16:22.521 "num_base_bdevs": 4, 00:16:22.521 "num_base_bdevs_discovered": 4, 00:16:22.521 "num_base_bdevs_operational": 4, 00:16:22.521 "process": { 00:16:22.521 "type": "rebuild", 00:16:22.521 "target": "spare", 00:16:22.521 "progress": { 00:16:22.521 "blocks": 65280, 00:16:22.521 "percent": 33 00:16:22.521 } 00:16:22.521 }, 00:16:22.521 "base_bdevs_list": [ 00:16:22.521 { 00:16:22.521 "name": "spare", 00:16:22.521 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 
00:16:22.521 "is_configured": true, 00:16:22.521 "data_offset": 0, 00:16:22.521 "data_size": 65536 00:16:22.521 }, 00:16:22.521 { 00:16:22.521 "name": "BaseBdev2", 00:16:22.521 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:22.521 "is_configured": true, 00:16:22.521 "data_offset": 0, 00:16:22.521 "data_size": 65536 00:16:22.521 }, 00:16:22.521 { 00:16:22.521 "name": "BaseBdev3", 00:16:22.521 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:22.521 "is_configured": true, 00:16:22.521 "data_offset": 0, 00:16:22.521 "data_size": 65536 00:16:22.521 }, 00:16:22.521 { 00:16:22.521 "name": "BaseBdev4", 00:16:22.521 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:22.521 "is_configured": true, 00:16:22.521 "data_offset": 0, 00:16:22.521 "data_size": 65536 00:16:22.521 } 00:16:22.521 ] 00:16:22.521 }' 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.521 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.780 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.780 17:57:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.735 17:57:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.735 "name": "raid_bdev1", 00:16:23.735 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:23.735 "strip_size_kb": 64, 00:16:23.735 "state": "online", 00:16:23.735 "raid_level": "raid5f", 00:16:23.735 "superblock": false, 00:16:23.735 "num_base_bdevs": 4, 00:16:23.735 "num_base_bdevs_discovered": 4, 00:16:23.735 "num_base_bdevs_operational": 4, 00:16:23.735 "process": { 00:16:23.735 "type": "rebuild", 00:16:23.735 "target": "spare", 00:16:23.735 "progress": { 00:16:23.735 "blocks": 86400, 00:16:23.735 "percent": 43 00:16:23.735 } 00:16:23.735 }, 00:16:23.735 "base_bdevs_list": [ 00:16:23.735 { 00:16:23.735 "name": "spare", 00:16:23.735 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:23.735 "is_configured": true, 00:16:23.735 "data_offset": 0, 00:16:23.735 "data_size": 65536 00:16:23.735 }, 00:16:23.735 { 00:16:23.735 "name": "BaseBdev2", 00:16:23.735 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:23.735 "is_configured": true, 00:16:23.735 "data_offset": 0, 00:16:23.735 "data_size": 65536 00:16:23.735 }, 00:16:23.735 { 00:16:23.735 "name": "BaseBdev3", 00:16:23.735 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:23.735 "is_configured": true, 00:16:23.735 "data_offset": 0, 00:16:23.735 "data_size": 65536 00:16:23.735 }, 00:16:23.735 { 00:16:23.735 "name": 
"BaseBdev4", 00:16:23.735 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:23.735 "is_configured": true, 00:16:23.735 "data_offset": 0, 00:16:23.735 "data_size": 65536 00:16:23.735 } 00:16:23.735 ] 00:16:23.735 }' 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.735 17:57:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.115 17:57:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.115 17:57:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.115 "name": "raid_bdev1", 00:16:25.115 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:25.115 "strip_size_kb": 64, 00:16:25.115 "state": "online", 00:16:25.115 "raid_level": "raid5f", 00:16:25.115 "superblock": false, 00:16:25.115 "num_base_bdevs": 4, 00:16:25.115 "num_base_bdevs_discovered": 4, 00:16:25.115 "num_base_bdevs_operational": 4, 00:16:25.115 "process": { 00:16:25.115 "type": "rebuild", 00:16:25.115 "target": "spare", 00:16:25.115 "progress": { 00:16:25.115 "blocks": 109440, 00:16:25.115 "percent": 55 00:16:25.115 } 00:16:25.115 }, 00:16:25.115 "base_bdevs_list": [ 00:16:25.115 { 00:16:25.115 "name": "spare", 00:16:25.115 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:25.115 "is_configured": true, 00:16:25.115 "data_offset": 0, 00:16:25.115 "data_size": 65536 00:16:25.115 }, 00:16:25.115 { 00:16:25.115 "name": "BaseBdev2", 00:16:25.115 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:25.115 "is_configured": true, 00:16:25.115 "data_offset": 0, 00:16:25.115 "data_size": 65536 00:16:25.115 }, 00:16:25.115 { 00:16:25.115 "name": "BaseBdev3", 00:16:25.115 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:25.115 "is_configured": true, 00:16:25.115 "data_offset": 0, 00:16:25.115 "data_size": 65536 00:16:25.115 }, 00:16:25.115 { 00:16:25.115 "name": "BaseBdev4", 00:16:25.115 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:25.115 "is_configured": true, 00:16:25.116 "data_offset": 0, 00:16:25.116 "data_size": 65536 00:16:25.116 } 00:16:25.116 ] 00:16:25.116 }' 00:16:25.116 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.116 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.116 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.116 17:57:43 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.116 17:57:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.055 "name": "raid_bdev1", 00:16:26.055 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:26.055 "strip_size_kb": 64, 00:16:26.055 "state": "online", 00:16:26.055 "raid_level": "raid5f", 00:16:26.055 "superblock": false, 00:16:26.055 "num_base_bdevs": 4, 00:16:26.055 "num_base_bdevs_discovered": 4, 00:16:26.055 "num_base_bdevs_operational": 4, 00:16:26.055 "process": { 00:16:26.055 "type": "rebuild", 00:16:26.055 "target": "spare", 00:16:26.055 "progress": { 00:16:26.055 "blocks": 130560, 00:16:26.055 "percent": 66 
00:16:26.055 } 00:16:26.055 }, 00:16:26.055 "base_bdevs_list": [ 00:16:26.055 { 00:16:26.055 "name": "spare", 00:16:26.055 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:26.055 "is_configured": true, 00:16:26.055 "data_offset": 0, 00:16:26.055 "data_size": 65536 00:16:26.055 }, 00:16:26.055 { 00:16:26.055 "name": "BaseBdev2", 00:16:26.055 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:26.055 "is_configured": true, 00:16:26.055 "data_offset": 0, 00:16:26.055 "data_size": 65536 00:16:26.055 }, 00:16:26.055 { 00:16:26.055 "name": "BaseBdev3", 00:16:26.055 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:26.055 "is_configured": true, 00:16:26.055 "data_offset": 0, 00:16:26.055 "data_size": 65536 00:16:26.055 }, 00:16:26.055 { 00:16:26.055 "name": "BaseBdev4", 00:16:26.055 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:26.055 "is_configured": true, 00:16:26.055 "data_offset": 0, 00:16:26.055 "data_size": 65536 00:16:26.055 } 00:16:26.055 ] 00:16:26.055 }' 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.055 17:57:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.993 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.993 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.252 "name": "raid_bdev1", 00:16:27.252 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:27.252 "strip_size_kb": 64, 00:16:27.252 "state": "online", 00:16:27.252 "raid_level": "raid5f", 00:16:27.252 "superblock": false, 00:16:27.252 "num_base_bdevs": 4, 00:16:27.252 "num_base_bdevs_discovered": 4, 00:16:27.252 "num_base_bdevs_operational": 4, 00:16:27.252 "process": { 00:16:27.252 "type": "rebuild", 00:16:27.252 "target": "spare", 00:16:27.252 "progress": { 00:16:27.252 "blocks": 151680, 00:16:27.252 "percent": 77 00:16:27.252 } 00:16:27.252 }, 00:16:27.252 "base_bdevs_list": [ 00:16:27.252 { 00:16:27.252 "name": "spare", 00:16:27.252 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:27.252 "is_configured": true, 00:16:27.252 "data_offset": 0, 00:16:27.252 "data_size": 65536 00:16:27.252 }, 00:16:27.252 { 00:16:27.252 "name": "BaseBdev2", 00:16:27.252 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:27.252 "is_configured": true, 00:16:27.252 "data_offset": 0, 00:16:27.252 "data_size": 65536 00:16:27.252 }, 00:16:27.252 { 00:16:27.252 "name": "BaseBdev3", 00:16:27.252 "uuid": 
"194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:27.252 "is_configured": true, 00:16:27.252 "data_offset": 0, 00:16:27.252 "data_size": 65536 00:16:27.252 }, 00:16:27.252 { 00:16:27.252 "name": "BaseBdev4", 00:16:27.252 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:27.252 "is_configured": true, 00:16:27.252 "data_offset": 0, 00:16:27.252 "data_size": 65536 00:16:27.252 } 00:16:27.252 ] 00:16:27.252 }' 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.252 17:57:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.192 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.192 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.193 17:57:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.193 17:57:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.451 "name": "raid_bdev1", 00:16:28.451 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:28.451 "strip_size_kb": 64, 00:16:28.451 "state": "online", 00:16:28.451 "raid_level": "raid5f", 00:16:28.451 "superblock": false, 00:16:28.451 "num_base_bdevs": 4, 00:16:28.451 "num_base_bdevs_discovered": 4, 00:16:28.451 "num_base_bdevs_operational": 4, 00:16:28.451 "process": { 00:16:28.451 "type": "rebuild", 00:16:28.451 "target": "spare", 00:16:28.451 "progress": { 00:16:28.451 "blocks": 174720, 00:16:28.451 "percent": 88 00:16:28.451 } 00:16:28.451 }, 00:16:28.451 "base_bdevs_list": [ 00:16:28.451 { 00:16:28.451 "name": "spare", 00:16:28.451 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:28.451 "is_configured": true, 00:16:28.451 "data_offset": 0, 00:16:28.451 "data_size": 65536 00:16:28.451 }, 00:16:28.451 { 00:16:28.451 "name": "BaseBdev2", 00:16:28.451 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:28.451 "is_configured": true, 00:16:28.451 "data_offset": 0, 00:16:28.451 "data_size": 65536 00:16:28.451 }, 00:16:28.451 { 00:16:28.451 "name": "BaseBdev3", 00:16:28.451 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:28.451 "is_configured": true, 00:16:28.451 "data_offset": 0, 00:16:28.451 "data_size": 65536 00:16:28.451 }, 00:16:28.451 { 00:16:28.451 "name": "BaseBdev4", 00:16:28.451 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:28.451 "is_configured": true, 00:16:28.451 "data_offset": 0, 00:16:28.451 "data_size": 65536 00:16:28.451 } 00:16:28.451 ] 00:16:28.451 }' 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.451 17:57:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.424 [2024-10-25 17:57:47.763025] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:29.424 [2024-10-25 17:57:47.763167] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:29.424 [2024-10-25 17:57:47.763244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.424 17:57:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.424 "name": "raid_bdev1", 00:16:29.424 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:29.424 "strip_size_kb": 64, 00:16:29.424 "state": "online", 00:16:29.424 "raid_level": "raid5f", 00:16:29.424 "superblock": false, 00:16:29.424 "num_base_bdevs": 4, 00:16:29.424 "num_base_bdevs_discovered": 4, 00:16:29.425 "num_base_bdevs_operational": 4, 00:16:29.425 "base_bdevs_list": [ 00:16:29.425 { 00:16:29.425 "name": "spare", 00:16:29.425 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:29.425 "is_configured": true, 00:16:29.425 "data_offset": 0, 00:16:29.425 "data_size": 65536 00:16:29.425 }, 00:16:29.425 { 00:16:29.425 "name": "BaseBdev2", 00:16:29.425 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:29.425 "is_configured": true, 00:16:29.425 "data_offset": 0, 00:16:29.425 "data_size": 65536 00:16:29.425 }, 00:16:29.425 { 00:16:29.425 "name": "BaseBdev3", 00:16:29.425 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:29.425 "is_configured": true, 00:16:29.425 "data_offset": 0, 00:16:29.425 "data_size": 65536 00:16:29.425 }, 00:16:29.425 { 00:16:29.425 "name": "BaseBdev4", 00:16:29.425 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:29.425 "is_configured": true, 00:16:29.425 "data_offset": 0, 00:16:29.425 "data_size": 65536 00:16:29.425 } 00:16:29.425 ] 00:16:29.425 }' 00:16:29.425 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.685 "name": "raid_bdev1", 00:16:29.685 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:29.685 "strip_size_kb": 64, 00:16:29.685 "state": "online", 00:16:29.685 "raid_level": "raid5f", 00:16:29.685 "superblock": false, 00:16:29.685 "num_base_bdevs": 4, 00:16:29.685 "num_base_bdevs_discovered": 4, 00:16:29.685 "num_base_bdevs_operational": 4, 00:16:29.685 "base_bdevs_list": [ 00:16:29.685 { 00:16:29.685 "name": "spare", 00:16:29.685 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 65536 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev2", 00:16:29.685 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 65536 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev3", 
00:16:29.685 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 65536 00:16:29.685 }, 00:16:29.685 { 00:16:29.685 "name": "BaseBdev4", 00:16:29.685 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 0, 00:16:29.685 "data_size": 65536 00:16:29.685 } 00:16:29.685 ] 00:16:29.685 }' 00:16:29.685 17:57:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.685 17:57:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.685 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.685 "name": "raid_bdev1", 00:16:29.685 "uuid": "eba83298-f719-40a0-8fbc-9c4a1bdf8b57", 00:16:29.685 "strip_size_kb": 64, 00:16:29.685 "state": "online", 00:16:29.685 "raid_level": "raid5f", 00:16:29.685 "superblock": false, 00:16:29.685 "num_base_bdevs": 4, 00:16:29.685 "num_base_bdevs_discovered": 4, 00:16:29.685 "num_base_bdevs_operational": 4, 00:16:29.685 "base_bdevs_list": [ 00:16:29.685 { 00:16:29.685 "name": "spare", 00:16:29.685 "uuid": "ef05299f-a65e-584c-8973-e323910c2ac0", 00:16:29.685 "is_configured": true, 00:16:29.685 "data_offset": 0, 00:16:29.686 "data_size": 65536 00:16:29.686 }, 00:16:29.686 { 00:16:29.686 "name": "BaseBdev2", 00:16:29.686 "uuid": "eff24e1b-a547-5336-bf10-c1d7776d4af2", 00:16:29.686 "is_configured": true, 00:16:29.686 "data_offset": 0, 00:16:29.686 "data_size": 65536 00:16:29.686 }, 00:16:29.686 { 00:16:29.686 "name": "BaseBdev3", 00:16:29.686 "uuid": "194cb8a7-513e-50cb-92fa-322dd6dd70ed", 00:16:29.686 "is_configured": true, 00:16:29.686 "data_offset": 0, 00:16:29.686 "data_size": 65536 00:16:29.686 }, 00:16:29.686 { 00:16:29.686 "name": "BaseBdev4", 00:16:29.686 "uuid": "3994bca3-364f-5903-b8dc-5435a3e6afe4", 00:16:29.686 "is_configured": true, 00:16:29.686 "data_offset": 0, 00:16:29.686 "data_size": 65536 00:16:29.686 } 00:16:29.686 ] 00:16:29.686 }' 00:16:29.686 17:57:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.686 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.255 [2024-10-25 17:57:48.519071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.255 [2024-10-25 17:57:48.519121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.255 [2024-10-25 17:57:48.519221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.255 [2024-10-25 17:57:48.519336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.255 [2024-10-25 17:57:48.519349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.255 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:30.514 /dev/nbd0 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:30.514 17:57:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.514 1+0 records in 00:16:30.514 1+0 records out 00:16:30.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479932 s, 8.5 MB/s 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.514 17:57:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:30.773 /dev/nbd1 00:16:30.773 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:30.773 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:30.773 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 
00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.774 1+0 records in 00:16:30.774 1+0 records out 00:16:30.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427794 s, 9.6 MB/s 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.774 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:31.032 
17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.032 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.292 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:31.553 17:57:49 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84505 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84505 ']' 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84505 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84505 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84505' 00:16:31.553 killing process with pid 84505 00:16:31.553 Received shutdown signal, test time was about 60.000000 seconds 00:16:31.553 00:16:31.553 Latency(us) 00:16:31.553 [2024-10-25T17:57:49.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.553 [2024-10-25T17:57:49.989Z] 
=================================================================================================================== 00:16:31.553 [2024-10-25T17:57:49.989Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84505 00:16:31.553 [2024-10-25 17:57:49.822747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.553 17:57:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84505 00:16:32.121 [2024-10-25 17:57:50.307405] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:33.060 00:16:33.060 real 0m19.206s 00:16:33.060 user 0m23.183s 00:16:33.060 sys 0m2.331s 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.060 ************************************ 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.060 END TEST raid5f_rebuild_test 00:16:33.060 ************************************ 00:16:33.060 17:57:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:33.060 17:57:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:33.060 17:57:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.060 17:57:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.060 ************************************ 00:16:33.060 START TEST raid5f_rebuild_test_sb 00:16:33.060 ************************************ 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:33.060 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:33.061 17:57:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85008 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85008 00:16:33.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85008 ']' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.061 17:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.321 [2024-10-25 17:57:51.574249] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:16:33.321 [2024-10-25 17:57:51.574489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:33.321 Zero copy mechanism will not be used. 
00:16:33.321 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:16:33.321 [2024-10-25 17:57:51.734976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.581 [2024-10-25 17:57:51.858052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.843 [2024-10-25 17:57:52.063062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.843 [2024-10-25 17:57:52.063201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.105 BaseBdev1_malloc 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.105 [2024-10-25 17:57:52.453751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:34.105 [2024-10-25 17:57:52.453876] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:34.105 [2024-10-25 17:57:52.453924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:34.105 [2024-10-25 17:57:52.453960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.105 [2024-10-25 17:57:52.456040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.105 [2024-10-25 17:57:52.456128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:34.105 BaseBdev1 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.105 BaseBdev2_malloc 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.105 [2024-10-25 17:57:52.509956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:34.105 [2024-10-25 17:57:52.510017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.105 [2024-10-25 17:57:52.510037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:34.105 
[2024-10-25 17:57:52.510050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.105 [2024-10-25 17:57:52.512231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.105 [2024-10-25 17:57:52.512349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:34.105 BaseBdev2 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.105 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.365 BaseBdev3_malloc 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.365 [2024-10-25 17:57:52.574623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:34.365 [2024-10-25 17:57:52.574676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.365 [2024-10-25 17:57:52.574698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:34.365 [2024-10-25 17:57:52.574709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.365 [2024-10-25 17:57:52.576906] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.365 [2024-10-25 17:57:52.576991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:34.365 BaseBdev3 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.365 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.365 BaseBdev4_malloc 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 [2024-10-25 17:57:52.628437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:34.366 [2024-10-25 17:57:52.628496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.366 [2024-10-25 17:57:52.628514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:34.366 [2024-10-25 17:57:52.628525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.366 [2024-10-25 17:57:52.630803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.366 [2024-10-25 17:57:52.630852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:16:34.366 BaseBdev4 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 spare_malloc 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 spare_delay 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 [2024-10-25 17:57:52.698931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.366 [2024-10-25 17:57:52.698988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.366 [2024-10-25 17:57:52.699007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:34.366 [2024-10-25 17:57:52.699018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.366 [2024-10-25 17:57:52.701060] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.366 [2024-10-25 17:57:52.701158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.366 spare 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 [2024-10-25 17:57:52.711006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.366 [2024-10-25 17:57:52.712842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.366 [2024-10-25 17:57:52.712911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.366 [2024-10-25 17:57:52.712969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.366 [2024-10-25 17:57:52.713177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:34.366 [2024-10-25 17:57:52.713196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.366 [2024-10-25 17:57:52.713444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:34.366 [2024-10-25 17:57:52.720437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:34.366 [2024-10-25 17:57:52.720466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:34.366 [2024-10-25 17:57:52.720653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.366 "name": "raid_bdev1", 00:16:34.366 "uuid": 
"46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:34.366 "strip_size_kb": 64, 00:16:34.366 "state": "online", 00:16:34.366 "raid_level": "raid5f", 00:16:34.366 "superblock": true, 00:16:34.366 "num_base_bdevs": 4, 00:16:34.366 "num_base_bdevs_discovered": 4, 00:16:34.366 "num_base_bdevs_operational": 4, 00:16:34.366 "base_bdevs_list": [ 00:16:34.366 { 00:16:34.366 "name": "BaseBdev1", 00:16:34.366 "uuid": "22a19960-85de-55c4-869e-c08b4efc0a40", 00:16:34.366 "is_configured": true, 00:16:34.366 "data_offset": 2048, 00:16:34.366 "data_size": 63488 00:16:34.366 }, 00:16:34.366 { 00:16:34.366 "name": "BaseBdev2", 00:16:34.366 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:34.366 "is_configured": true, 00:16:34.366 "data_offset": 2048, 00:16:34.366 "data_size": 63488 00:16:34.366 }, 00:16:34.366 { 00:16:34.366 "name": "BaseBdev3", 00:16:34.366 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:34.366 "is_configured": true, 00:16:34.366 "data_offset": 2048, 00:16:34.366 "data_size": 63488 00:16:34.366 }, 00:16:34.366 { 00:16:34.366 "name": "BaseBdev4", 00:16:34.366 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:34.366 "is_configured": true, 00:16:34.366 "data_offset": 2048, 00:16:34.366 "data_size": 63488 00:16:34.366 } 00:16:34.366 ] 00:16:34.366 }' 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.366 17:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:34.936 [2024-10-25 17:57:53.153189] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.936 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:35.197 [2024-10-25 17:57:53.404669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:35.197 /dev/nbd0 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.197 1+0 records in 00:16:35.197 1+0 records out 00:16:35.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641788 s, 6.4 MB/s 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:35.197 17:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:35.766 496+0 records in 00:16:35.766 496+0 records out 00:16:35.766 97517568 bytes (98 MB, 93 MiB) copied, 0.520562 s, 187 MB/s 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:35.766 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:36.026 [2024-10-25 17:57:54.241162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.026 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:36.026 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:36.026 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.027 [2024-10-25 17:57:54.264505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.027 "name": "raid_bdev1", 00:16:36.027 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:36.027 "strip_size_kb": 64, 00:16:36.027 "state": "online", 00:16:36.027 "raid_level": "raid5f", 00:16:36.027 "superblock": true, 00:16:36.027 "num_base_bdevs": 4, 00:16:36.027 "num_base_bdevs_discovered": 3, 00:16:36.027 "num_base_bdevs_operational": 3, 00:16:36.027 "base_bdevs_list": [ 00:16:36.027 { 00:16:36.027 "name": null, 00:16:36.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.027 "is_configured": 
false, 00:16:36.027 "data_offset": 0, 00:16:36.027 "data_size": 63488 00:16:36.027 }, 00:16:36.027 { 00:16:36.027 "name": "BaseBdev2", 00:16:36.027 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:36.027 "is_configured": true, 00:16:36.027 "data_offset": 2048, 00:16:36.027 "data_size": 63488 00:16:36.027 }, 00:16:36.027 { 00:16:36.027 "name": "BaseBdev3", 00:16:36.027 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:36.027 "is_configured": true, 00:16:36.027 "data_offset": 2048, 00:16:36.027 "data_size": 63488 00:16:36.027 }, 00:16:36.027 { 00:16:36.027 "name": "BaseBdev4", 00:16:36.027 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:36.027 "is_configured": true, 00:16:36.027 "data_offset": 2048, 00:16:36.027 "data_size": 63488 00:16:36.027 } 00:16:36.027 ] 00:16:36.027 }' 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.027 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.596 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.596 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.596 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.596 [2024-10-25 17:57:54.751794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.596 [2024-10-25 17:57:54.768317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:36.596 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.596 17:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:36.596 [2024-10-25 17:57:54.778609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.534 "name": "raid_bdev1", 00:16:37.534 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:37.534 "strip_size_kb": 64, 00:16:37.534 "state": "online", 00:16:37.534 "raid_level": "raid5f", 00:16:37.534 "superblock": true, 00:16:37.534 "num_base_bdevs": 4, 00:16:37.534 "num_base_bdevs_discovered": 4, 00:16:37.534 "num_base_bdevs_operational": 4, 00:16:37.534 "process": { 00:16:37.534 "type": "rebuild", 00:16:37.534 "target": "spare", 00:16:37.534 "progress": { 00:16:37.534 "blocks": 19200, 00:16:37.534 "percent": 10 00:16:37.534 } 00:16:37.534 }, 00:16:37.534 "base_bdevs_list": [ 00:16:37.534 { 00:16:37.534 "name": "spare", 00:16:37.534 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:37.534 "is_configured": true, 00:16:37.534 "data_offset": 2048, 00:16:37.534 "data_size": 63488 00:16:37.534 }, 
00:16:37.534 { 00:16:37.534 "name": "BaseBdev2", 00:16:37.534 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:37.534 "is_configured": true, 00:16:37.534 "data_offset": 2048, 00:16:37.534 "data_size": 63488 00:16:37.534 }, 00:16:37.534 { 00:16:37.534 "name": "BaseBdev3", 00:16:37.534 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:37.534 "is_configured": true, 00:16:37.534 "data_offset": 2048, 00:16:37.534 "data_size": 63488 00:16:37.534 }, 00:16:37.534 { 00:16:37.534 "name": "BaseBdev4", 00:16:37.534 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:37.534 "is_configured": true, 00:16:37.534 "data_offset": 2048, 00:16:37.534 "data_size": 63488 00:16:37.534 } 00:16:37.534 ] 00:16:37.534 }' 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.534 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.535 17:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.535 [2024-10-25 17:57:55.941551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.795 [2024-10-25 17:57:55.986431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.795 [2024-10-25 17:57:55.986515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.795 [2024-10-25 17:57:55.986532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.795 
[2024-10-25 17:57:55.986542] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.795 "name": "raid_bdev1", 00:16:37.795 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:37.795 "strip_size_kb": 64, 00:16:37.795 "state": "online", 00:16:37.795 "raid_level": "raid5f", 00:16:37.795 "superblock": true, 00:16:37.795 "num_base_bdevs": 4, 00:16:37.795 "num_base_bdevs_discovered": 3, 00:16:37.795 "num_base_bdevs_operational": 3, 00:16:37.795 "base_bdevs_list": [ 00:16:37.795 { 00:16:37.795 "name": null, 00:16:37.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.795 "is_configured": false, 00:16:37.795 "data_offset": 0, 00:16:37.795 "data_size": 63488 00:16:37.795 }, 00:16:37.795 { 00:16:37.795 "name": "BaseBdev2", 00:16:37.795 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:37.795 "is_configured": true, 00:16:37.795 "data_offset": 2048, 00:16:37.795 "data_size": 63488 00:16:37.795 }, 00:16:37.795 { 00:16:37.795 "name": "BaseBdev3", 00:16:37.795 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:37.795 "is_configured": true, 00:16:37.795 "data_offset": 2048, 00:16:37.795 "data_size": 63488 00:16:37.795 }, 00:16:37.795 { 00:16:37.795 "name": "BaseBdev4", 00:16:37.795 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:37.795 "is_configured": true, 00:16:37.795 "data_offset": 2048, 00:16:37.795 "data_size": 63488 00:16:37.795 } 00:16:37.795 ] 00:16:37.795 }' 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.795 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.056 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.316 "name": "raid_bdev1", 00:16:38.316 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:38.316 "strip_size_kb": 64, 00:16:38.316 "state": "online", 00:16:38.316 "raid_level": "raid5f", 00:16:38.316 "superblock": true, 00:16:38.316 "num_base_bdevs": 4, 00:16:38.316 "num_base_bdevs_discovered": 3, 00:16:38.316 "num_base_bdevs_operational": 3, 00:16:38.316 "base_bdevs_list": [ 00:16:38.316 { 00:16:38.316 "name": null, 00:16:38.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.316 "is_configured": false, 00:16:38.316 "data_offset": 0, 00:16:38.316 "data_size": 63488 00:16:38.316 }, 00:16:38.316 { 00:16:38.316 "name": "BaseBdev2", 00:16:38.316 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:38.316 "is_configured": true, 00:16:38.316 "data_offset": 2048, 00:16:38.316 "data_size": 63488 00:16:38.316 }, 00:16:38.316 { 00:16:38.316 "name": "BaseBdev3", 00:16:38.316 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:38.316 "is_configured": true, 00:16:38.316 "data_offset": 2048, 00:16:38.316 "data_size": 63488 00:16:38.316 }, 00:16:38.316 { 00:16:38.316 "name": "BaseBdev4", 00:16:38.316 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 
00:16:38.316 "is_configured": true, 00:16:38.316 "data_offset": 2048, 00:16:38.316 "data_size": 63488 00:16:38.316 } 00:16:38.316 ] 00:16:38.316 }' 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.316 [2024-10-25 17:57:56.622528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.316 [2024-10-25 17:57:56.640192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.316 17:57:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:38.316 [2024-10-25 17:57:56.650936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.256 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.516 "name": "raid_bdev1", 00:16:39.516 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:39.516 "strip_size_kb": 64, 00:16:39.516 "state": "online", 00:16:39.516 "raid_level": "raid5f", 00:16:39.516 "superblock": true, 00:16:39.516 "num_base_bdevs": 4, 00:16:39.516 "num_base_bdevs_discovered": 4, 00:16:39.516 "num_base_bdevs_operational": 4, 00:16:39.516 "process": { 00:16:39.516 "type": "rebuild", 00:16:39.516 "target": "spare", 00:16:39.516 "progress": { 00:16:39.516 "blocks": 17280, 00:16:39.516 "percent": 9 00:16:39.516 } 00:16:39.516 }, 00:16:39.516 "base_bdevs_list": [ 00:16:39.516 { 00:16:39.516 "name": "spare", 00:16:39.516 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:39.516 "is_configured": true, 00:16:39.516 "data_offset": 2048, 00:16:39.516 "data_size": 63488 00:16:39.516 }, 00:16:39.516 { 00:16:39.516 "name": "BaseBdev2", 00:16:39.516 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:39.516 "is_configured": true, 00:16:39.516 "data_offset": 2048, 00:16:39.516 "data_size": 63488 00:16:39.516 }, 00:16:39.516 { 00:16:39.516 "name": "BaseBdev3", 00:16:39.516 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:39.516 "is_configured": true, 00:16:39.516 "data_offset": 2048, 
00:16:39.516 "data_size": 63488 00:16:39.516 }, 00:16:39.516 { 00:16:39.516 "name": "BaseBdev4", 00:16:39.516 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:39.516 "is_configured": true, 00:16:39.516 "data_offset": 2048, 00:16:39.516 "data_size": 63488 00:16:39.516 } 00:16:39.516 ] 00:16:39.516 }' 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:39.516 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=642 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.516 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.517 "name": "raid_bdev1", 00:16:39.517 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:39.517 "strip_size_kb": 64, 00:16:39.517 "state": "online", 00:16:39.517 "raid_level": "raid5f", 00:16:39.517 "superblock": true, 00:16:39.517 "num_base_bdevs": 4, 00:16:39.517 "num_base_bdevs_discovered": 4, 00:16:39.517 "num_base_bdevs_operational": 4, 00:16:39.517 "process": { 00:16:39.517 "type": "rebuild", 00:16:39.517 "target": "spare", 00:16:39.517 "progress": { 00:16:39.517 "blocks": 21120, 00:16:39.517 "percent": 11 00:16:39.517 } 00:16:39.517 }, 00:16:39.517 "base_bdevs_list": [ 00:16:39.517 { 00:16:39.517 "name": "spare", 00:16:39.517 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:39.517 "is_configured": true, 00:16:39.517 "data_offset": 2048, 00:16:39.517 "data_size": 63488 00:16:39.517 }, 00:16:39.517 { 00:16:39.517 "name": "BaseBdev2", 00:16:39.517 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:39.517 "is_configured": true, 00:16:39.517 "data_offset": 2048, 00:16:39.517 "data_size": 63488 00:16:39.517 }, 00:16:39.517 { 00:16:39.517 "name": "BaseBdev3", 00:16:39.517 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:39.517 "is_configured": true, 00:16:39.517 "data_offset": 2048, 
00:16:39.517 "data_size": 63488 00:16:39.517 }, 00:16:39.517 { 00:16:39.517 "name": "BaseBdev4", 00:16:39.517 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:39.517 "is_configured": true, 00:16:39.517 "data_offset": 2048, 00:16:39.517 "data_size": 63488 00:16:39.517 } 00:16:39.517 ] 00:16:39.517 }' 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.517 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.776 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.776 17:57:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:40.716 17:57:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.716 17:57:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.716 "name": "raid_bdev1", 00:16:40.716 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:40.716 "strip_size_kb": 64, 00:16:40.716 "state": "online", 00:16:40.716 "raid_level": "raid5f", 00:16:40.716 "superblock": true, 00:16:40.716 "num_base_bdevs": 4, 00:16:40.716 "num_base_bdevs_discovered": 4, 00:16:40.716 "num_base_bdevs_operational": 4, 00:16:40.716 "process": { 00:16:40.716 "type": "rebuild", 00:16:40.716 "target": "spare", 00:16:40.716 "progress": { 00:16:40.716 "blocks": 44160, 00:16:40.716 "percent": 23 00:16:40.716 } 00:16:40.716 }, 00:16:40.716 "base_bdevs_list": [ 00:16:40.716 { 00:16:40.716 "name": "spare", 00:16:40.716 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 2048, 00:16:40.716 "data_size": 63488 00:16:40.716 }, 00:16:40.716 { 00:16:40.716 "name": "BaseBdev2", 00:16:40.716 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 2048, 00:16:40.716 "data_size": 63488 00:16:40.716 }, 00:16:40.716 { 00:16:40.716 "name": "BaseBdev3", 00:16:40.716 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 2048, 00:16:40.716 "data_size": 63488 00:16:40.716 }, 00:16:40.716 { 00:16:40.716 "name": "BaseBdev4", 00:16:40.716 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:40.716 "is_configured": true, 00:16:40.716 "data_offset": 2048, 00:16:40.716 "data_size": 63488 00:16:40.716 } 00:16:40.716 ] 00:16:40.716 }' 00:16:40.716 17:57:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.716 17:57:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.716 17:57:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.716 17:57:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.716 17:57:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.099 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.100 "name": "raid_bdev1", 00:16:42.100 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:42.100 "strip_size_kb": 64, 00:16:42.100 "state": "online", 00:16:42.100 "raid_level": "raid5f", 00:16:42.100 "superblock": true, 00:16:42.100 "num_base_bdevs": 4, 00:16:42.100 "num_base_bdevs_discovered": 4, 00:16:42.100 "num_base_bdevs_operational": 
4, 00:16:42.100 "process": { 00:16:42.100 "type": "rebuild", 00:16:42.100 "target": "spare", 00:16:42.100 "progress": { 00:16:42.100 "blocks": 65280, 00:16:42.100 "percent": 34 00:16:42.100 } 00:16:42.100 }, 00:16:42.100 "base_bdevs_list": [ 00:16:42.100 { 00:16:42.100 "name": "spare", 00:16:42.100 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:42.100 "is_configured": true, 00:16:42.100 "data_offset": 2048, 00:16:42.100 "data_size": 63488 00:16:42.100 }, 00:16:42.100 { 00:16:42.100 "name": "BaseBdev2", 00:16:42.100 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:42.100 "is_configured": true, 00:16:42.100 "data_offset": 2048, 00:16:42.100 "data_size": 63488 00:16:42.100 }, 00:16:42.100 { 00:16:42.100 "name": "BaseBdev3", 00:16:42.100 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:42.100 "is_configured": true, 00:16:42.100 "data_offset": 2048, 00:16:42.100 "data_size": 63488 00:16:42.100 }, 00:16:42.100 { 00:16:42.100 "name": "BaseBdev4", 00:16:42.100 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:42.100 "is_configured": true, 00:16:42.100 "data_offset": 2048, 00:16:42.100 "data_size": 63488 00:16:42.100 } 00:16:42.100 ] 00:16:42.100 }' 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.100 17:58:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.039 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.039 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.039 
17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.040 "name": "raid_bdev1", 00:16:43.040 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:43.040 "strip_size_kb": 64, 00:16:43.040 "state": "online", 00:16:43.040 "raid_level": "raid5f", 00:16:43.040 "superblock": true, 00:16:43.040 "num_base_bdevs": 4, 00:16:43.040 "num_base_bdevs_discovered": 4, 00:16:43.040 "num_base_bdevs_operational": 4, 00:16:43.040 "process": { 00:16:43.040 "type": "rebuild", 00:16:43.040 "target": "spare", 00:16:43.040 "progress": { 00:16:43.040 "blocks": 86400, 00:16:43.040 "percent": 45 00:16:43.040 } 00:16:43.040 }, 00:16:43.040 "base_bdevs_list": [ 00:16:43.040 { 00:16:43.040 "name": "spare", 00:16:43.040 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:43.040 "is_configured": true, 00:16:43.040 "data_offset": 2048, 00:16:43.040 "data_size": 63488 00:16:43.040 }, 00:16:43.040 { 00:16:43.040 "name": "BaseBdev2", 00:16:43.040 "uuid": 
"bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:43.040 "is_configured": true, 00:16:43.040 "data_offset": 2048, 00:16:43.040 "data_size": 63488 00:16:43.040 }, 00:16:43.040 { 00:16:43.040 "name": "BaseBdev3", 00:16:43.040 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:43.040 "is_configured": true, 00:16:43.040 "data_offset": 2048, 00:16:43.040 "data_size": 63488 00:16:43.040 }, 00:16:43.040 { 00:16:43.040 "name": "BaseBdev4", 00:16:43.040 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:43.040 "is_configured": true, 00:16:43.040 "data_offset": 2048, 00:16:43.040 "data_size": 63488 00:16:43.040 } 00:16:43.040 ] 00:16:43.040 }' 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.040 17:58:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.981 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.241 "name": "raid_bdev1", 00:16:44.241 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:44.241 "strip_size_kb": 64, 00:16:44.241 "state": "online", 00:16:44.241 "raid_level": "raid5f", 00:16:44.241 "superblock": true, 00:16:44.241 "num_base_bdevs": 4, 00:16:44.241 "num_base_bdevs_discovered": 4, 00:16:44.241 "num_base_bdevs_operational": 4, 00:16:44.241 "process": { 00:16:44.241 "type": "rebuild", 00:16:44.241 "target": "spare", 00:16:44.241 "progress": { 00:16:44.241 "blocks": 109440, 00:16:44.241 "percent": 57 00:16:44.241 } 00:16:44.241 }, 00:16:44.241 "base_bdevs_list": [ 00:16:44.241 { 00:16:44.241 "name": "spare", 00:16:44.241 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:44.241 "is_configured": true, 00:16:44.241 "data_offset": 2048, 00:16:44.241 "data_size": 63488 00:16:44.241 }, 00:16:44.241 { 00:16:44.241 "name": "BaseBdev2", 00:16:44.241 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:44.241 "is_configured": true, 00:16:44.241 "data_offset": 2048, 00:16:44.241 "data_size": 63488 00:16:44.241 }, 00:16:44.241 { 00:16:44.241 "name": "BaseBdev3", 00:16:44.241 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:44.241 "is_configured": true, 00:16:44.241 "data_offset": 2048, 00:16:44.241 "data_size": 63488 00:16:44.241 }, 00:16:44.241 { 00:16:44.241 "name": "BaseBdev4", 00:16:44.241 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:44.241 "is_configured": true, 00:16:44.241 "data_offset": 
2048, 00:16:44.241 "data_size": 63488 00:16:44.241 } 00:16:44.241 ] 00:16:44.241 }' 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.241 17:58:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.182 
"name": "raid_bdev1", 00:16:45.182 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:45.182 "strip_size_kb": 64, 00:16:45.182 "state": "online", 00:16:45.182 "raid_level": "raid5f", 00:16:45.182 "superblock": true, 00:16:45.182 "num_base_bdevs": 4, 00:16:45.182 "num_base_bdevs_discovered": 4, 00:16:45.182 "num_base_bdevs_operational": 4, 00:16:45.182 "process": { 00:16:45.182 "type": "rebuild", 00:16:45.182 "target": "spare", 00:16:45.182 "progress": { 00:16:45.182 "blocks": 130560, 00:16:45.182 "percent": 68 00:16:45.182 } 00:16:45.182 }, 00:16:45.182 "base_bdevs_list": [ 00:16:45.182 { 00:16:45.182 "name": "spare", 00:16:45.182 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 2048, 00:16:45.182 "data_size": 63488 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev2", 00:16:45.182 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 2048, 00:16:45.182 "data_size": 63488 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev3", 00:16:45.182 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 2048, 00:16:45.182 "data_size": 63488 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev4", 00:16:45.182 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 2048, 00:16:45.182 "data_size": 63488 00:16:45.182 } 00:16:45.182 ] 00:16:45.182 }' 00:16:45.182 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.442 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.442 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.442 17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.442 
17:58:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.381 "name": "raid_bdev1", 00:16:46.381 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:46.381 "strip_size_kb": 64, 00:16:46.381 "state": "online", 00:16:46.381 "raid_level": "raid5f", 00:16:46.381 "superblock": true, 00:16:46.381 "num_base_bdevs": 4, 00:16:46.381 "num_base_bdevs_discovered": 4, 00:16:46.381 "num_base_bdevs_operational": 4, 00:16:46.381 "process": { 00:16:46.381 "type": "rebuild", 00:16:46.381 "target": "spare", 00:16:46.381 "progress": { 00:16:46.381 "blocks": 153600, 00:16:46.381 "percent": 80 00:16:46.381 } 00:16:46.381 }, 
00:16:46.381 "base_bdevs_list": [ 00:16:46.381 { 00:16:46.381 "name": "spare", 00:16:46.381 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:46.381 "is_configured": true, 00:16:46.381 "data_offset": 2048, 00:16:46.381 "data_size": 63488 00:16:46.381 }, 00:16:46.381 { 00:16:46.381 "name": "BaseBdev2", 00:16:46.381 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:46.381 "is_configured": true, 00:16:46.381 "data_offset": 2048, 00:16:46.381 "data_size": 63488 00:16:46.381 }, 00:16:46.381 { 00:16:46.381 "name": "BaseBdev3", 00:16:46.381 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:46.381 "is_configured": true, 00:16:46.381 "data_offset": 2048, 00:16:46.381 "data_size": 63488 00:16:46.381 }, 00:16:46.381 { 00:16:46.381 "name": "BaseBdev4", 00:16:46.381 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:46.381 "is_configured": true, 00:16:46.381 "data_offset": 2048, 00:16:46.381 "data_size": 63488 00:16:46.381 } 00:16:46.381 ] 00:16:46.381 }' 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.381 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.641 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.642 17:58:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.581 "name": "raid_bdev1", 00:16:47.581 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:47.581 "strip_size_kb": 64, 00:16:47.581 "state": "online", 00:16:47.581 "raid_level": "raid5f", 00:16:47.581 "superblock": true, 00:16:47.581 "num_base_bdevs": 4, 00:16:47.581 "num_base_bdevs_discovered": 4, 00:16:47.581 "num_base_bdevs_operational": 4, 00:16:47.581 "process": { 00:16:47.581 "type": "rebuild", 00:16:47.581 "target": "spare", 00:16:47.581 "progress": { 00:16:47.581 "blocks": 174720, 00:16:47.581 "percent": 91 00:16:47.581 } 00:16:47.581 }, 00:16:47.581 "base_bdevs_list": [ 00:16:47.581 { 00:16:47.581 "name": "spare", 00:16:47.581 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:47.581 "is_configured": true, 00:16:47.581 "data_offset": 2048, 00:16:47.581 "data_size": 63488 00:16:47.581 }, 00:16:47.581 { 00:16:47.581 "name": "BaseBdev2", 00:16:47.581 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:47.581 "is_configured": true, 00:16:47.581 "data_offset": 2048, 00:16:47.581 "data_size": 63488 00:16:47.581 }, 00:16:47.581 { 00:16:47.581 "name": "BaseBdev3", 
00:16:47.581 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:47.581 "is_configured": true, 00:16:47.581 "data_offset": 2048, 00:16:47.581 "data_size": 63488 00:16:47.581 }, 00:16:47.581 { 00:16:47.581 "name": "BaseBdev4", 00:16:47.581 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:47.581 "is_configured": true, 00:16:47.581 "data_offset": 2048, 00:16:47.581 "data_size": 63488 00:16:47.581 } 00:16:47.581 ] 00:16:47.581 }' 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.581 17:58:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.581 17:58:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.581 17:58:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.520 [2024-10-25 17:58:06.719553] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:48.520 [2024-10-25 17:58:06.719692] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:48.520 [2024-10-25 17:58:06.719909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.781 17:58:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.781 "name": "raid_bdev1", 00:16:48.781 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:48.781 "strip_size_kb": 64, 00:16:48.781 "state": "online", 00:16:48.781 "raid_level": "raid5f", 00:16:48.781 "superblock": true, 00:16:48.781 "num_base_bdevs": 4, 00:16:48.781 "num_base_bdevs_discovered": 4, 00:16:48.781 "num_base_bdevs_operational": 4, 00:16:48.781 "base_bdevs_list": [ 00:16:48.781 { 00:16:48.781 "name": "spare", 00:16:48.781 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:48.781 "is_configured": true, 00:16:48.781 "data_offset": 2048, 00:16:48.781 "data_size": 63488 00:16:48.781 }, 00:16:48.781 { 00:16:48.781 "name": "BaseBdev2", 00:16:48.781 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:48.781 "is_configured": true, 00:16:48.781 "data_offset": 2048, 00:16:48.781 "data_size": 63488 00:16:48.781 }, 00:16:48.781 { 00:16:48.781 "name": "BaseBdev3", 00:16:48.781 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:48.781 "is_configured": true, 00:16:48.781 "data_offset": 2048, 00:16:48.781 "data_size": 63488 00:16:48.781 }, 00:16:48.781 { 00:16:48.781 "name": "BaseBdev4", 00:16:48.781 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:48.781 "is_configured": true, 00:16:48.781 "data_offset": 2048, 
00:16:48.781 "data_size": 63488 00:16:48.781 } 00:16:48.781 ] 00:16:48.781 }' 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.781 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.782 "name": "raid_bdev1", 00:16:48.782 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:48.782 "strip_size_kb": 64, 00:16:48.782 
"state": "online", 00:16:48.782 "raid_level": "raid5f", 00:16:48.782 "superblock": true, 00:16:48.782 "num_base_bdevs": 4, 00:16:48.782 "num_base_bdevs_discovered": 4, 00:16:48.782 "num_base_bdevs_operational": 4, 00:16:48.782 "base_bdevs_list": [ 00:16:48.782 { 00:16:48.782 "name": "spare", 00:16:48.782 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:48.782 "is_configured": true, 00:16:48.782 "data_offset": 2048, 00:16:48.782 "data_size": 63488 00:16:48.782 }, 00:16:48.782 { 00:16:48.782 "name": "BaseBdev2", 00:16:48.782 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:48.782 "is_configured": true, 00:16:48.782 "data_offset": 2048, 00:16:48.782 "data_size": 63488 00:16:48.782 }, 00:16:48.782 { 00:16:48.782 "name": "BaseBdev3", 00:16:48.782 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:48.782 "is_configured": true, 00:16:48.782 "data_offset": 2048, 00:16:48.782 "data_size": 63488 00:16:48.782 }, 00:16:48.782 { 00:16:48.782 "name": "BaseBdev4", 00:16:48.782 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:48.782 "is_configured": true, 00:16:48.782 "data_offset": 2048, 00:16:48.782 "data_size": 63488 00:16:48.782 } 00:16:48.782 ] 00:16:48.782 }' 00:16:48.782 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.042 "name": "raid_bdev1", 00:16:49.042 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:49.042 "strip_size_kb": 64, 00:16:49.042 "state": "online", 00:16:49.042 "raid_level": "raid5f", 00:16:49.042 "superblock": true, 00:16:49.042 "num_base_bdevs": 4, 00:16:49.042 "num_base_bdevs_discovered": 4, 00:16:49.042 "num_base_bdevs_operational": 4, 00:16:49.042 "base_bdevs_list": [ 00:16:49.042 { 00:16:49.042 "name": "spare", 00:16:49.042 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:49.042 "is_configured": true, 00:16:49.042 
"data_offset": 2048, 00:16:49.042 "data_size": 63488 00:16:49.042 }, 00:16:49.042 { 00:16:49.042 "name": "BaseBdev2", 00:16:49.042 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:49.042 "is_configured": true, 00:16:49.042 "data_offset": 2048, 00:16:49.042 "data_size": 63488 00:16:49.042 }, 00:16:49.042 { 00:16:49.042 "name": "BaseBdev3", 00:16:49.042 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:49.042 "is_configured": true, 00:16:49.042 "data_offset": 2048, 00:16:49.042 "data_size": 63488 00:16:49.042 }, 00:16:49.042 { 00:16:49.042 "name": "BaseBdev4", 00:16:49.042 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:49.042 "is_configured": true, 00:16:49.042 "data_offset": 2048, 00:16:49.042 "data_size": 63488 00:16:49.042 } 00:16:49.042 ] 00:16:49.042 }' 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.042 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.613 [2024-10-25 17:58:07.796229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.613 [2024-10-25 17:58:07.796324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.613 [2024-10-25 17:58:07.796436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.613 [2024-10-25 17:58:07.796572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.613 [2024-10-25 17:58:07.796611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:49.613 
17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.613 17:58:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:49.874 /dev/nbd0 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.874 1+0 records in 00:16:49.874 1+0 records out 00:16:49.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439262 s, 9.3 MB/s 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.874 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:50.134 /dev/nbd1 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.134 1+0 records in 00:16:50.134 1+0 records out 00:16:50.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584115 s, 7.0 MB/s 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:50.134 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.395 17:58:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.655 
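The trace above verifies the rebuilt data by exporting the original base bdev and the rebuilt spare over NBD and comparing them byte-for-byte past the superblock region (`cmp -i 1048576 /dev/nbd0 /dev/nbd1`), then tearing the NBD disks down. A minimal standalone sketch of that compare step, using ordinary temp files in place of `/dev/nbd0` and `/dev/nbd1` (all paths and sizes here are illustrative, not taken from the test):

```shell
#!/usr/bin/env bash
# Sketch of the data-verification step from the trace: compare two block
# devices (stand-in files here) while skipping the first 1 MiB, where the
# raid superblock lives and the contents are expected to differ.
set -euo pipefail

a=$(mktemp) b=$(mktemp)
trap 'rm -f "$a" "$b"' EXIT

# Identical payload throughout...
dd if=/dev/zero of="$a" bs=1M count=4 status=none
cp "$a" "$b"
# ...except for differing "superblocks" at offset 0, which the
# --ignore-initial offset below makes cmp skip on both inputs.
printf 'superblock-A' | dd of="$a" conv=notrunc status=none
printf 'superblock-B' | dd of="$b" conv=notrunc status=none

# cmp -i N skips the first N bytes of BOTH files before comparing.
if cmp -i 1048576 "$a" "$b"; then
  echo "data match"
else
  echo "data mismatch" >&2
  exit 1
fi
```

In the test itself the two sides are live NBD devices backed by SPDK bdevs (`nbd_start_disk` / `nbd_stop_disk` RPCs), so the same `cmp` invocation checks that the rebuild reproduced the base bdev's data exactly.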
17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.655 [2024-10-25 17:58:09.081466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.655 [2024-10-25 17:58:09.081572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.655 [2024-10-25 17:58:09.081619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:50.655 [2024-10-25 17:58:09.081648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.655 [2024-10-25 17:58:09.083987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.655 [2024-10-25 17:58:09.084058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.655 [2024-10-25 17:58:09.084174] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:50.655 [2024-10-25 17:58:09.084245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.655 [2024-10-25 17:58:09.084432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.655 [2024-10-25 17:58:09.084547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.655 [2024-10-25 17:58:09.084643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.655 spare 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.655 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.956 [2024-10-25 17:58:09.184554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:50.956 [2024-10-25 17:58:09.184594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.956 [2024-10-25 17:58:09.184918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:50.956 [2024-10-25 17:58:09.192160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:50.956 [2024-10-25 17:58:09.192224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:50.956 [2024-10-25 17:58:09.192443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.956 "name": "raid_bdev1", 00:16:50.956 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:50.956 "strip_size_kb": 64, 00:16:50.956 "state": "online", 00:16:50.956 "raid_level": "raid5f", 00:16:50.956 "superblock": true, 00:16:50.956 "num_base_bdevs": 4, 00:16:50.956 "num_base_bdevs_discovered": 4, 00:16:50.956 "num_base_bdevs_operational": 4, 00:16:50.956 "base_bdevs_list": [ 00:16:50.956 { 00:16:50.956 "name": "spare", 00:16:50.956 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "BaseBdev2", 00:16:50.956 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "BaseBdev3", 00:16:50.956 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:50.956 
"is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "BaseBdev4", 00:16:50.956 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 } 00:16:50.956 ] 00:16:50.956 }' 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.956 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.237 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.496 "name": "raid_bdev1", 00:16:51.496 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:51.496 "strip_size_kb": 64, 00:16:51.496 "state": "online", 00:16:51.496 "raid_level": "raid5f", 
00:16:51.496 "superblock": true, 00:16:51.496 "num_base_bdevs": 4, 00:16:51.496 "num_base_bdevs_discovered": 4, 00:16:51.496 "num_base_bdevs_operational": 4, 00:16:51.496 "base_bdevs_list": [ 00:16:51.496 { 00:16:51.496 "name": "spare", 00:16:51.496 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:51.496 "is_configured": true, 00:16:51.496 "data_offset": 2048, 00:16:51.496 "data_size": 63488 00:16:51.496 }, 00:16:51.496 { 00:16:51.496 "name": "BaseBdev2", 00:16:51.496 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:51.496 "is_configured": true, 00:16:51.496 "data_offset": 2048, 00:16:51.496 "data_size": 63488 00:16:51.496 }, 00:16:51.496 { 00:16:51.496 "name": "BaseBdev3", 00:16:51.496 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:51.496 "is_configured": true, 00:16:51.496 "data_offset": 2048, 00:16:51.496 "data_size": 63488 00:16:51.496 }, 00:16:51.496 { 00:16:51.496 "name": "BaseBdev4", 00:16:51.496 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:51.496 "is_configured": true, 00:16:51.496 "data_offset": 2048, 00:16:51.496 "data_size": 63488 00:16:51.496 } 00:16:51.496 ] 00:16:51.496 }' 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.496 [2024-10-25 17:58:09.856182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.496 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.497 "name": "raid_bdev1", 00:16:51.497 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:51.497 "strip_size_kb": 64, 00:16:51.497 "state": "online", 00:16:51.497 "raid_level": "raid5f", 00:16:51.497 "superblock": true, 00:16:51.497 "num_base_bdevs": 4, 00:16:51.497 "num_base_bdevs_discovered": 3, 00:16:51.497 "num_base_bdevs_operational": 3, 00:16:51.497 "base_bdevs_list": [ 00:16:51.497 { 00:16:51.497 "name": null, 00:16:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.497 "is_configured": false, 00:16:51.497 "data_offset": 0, 00:16:51.497 "data_size": 63488 00:16:51.497 }, 00:16:51.497 { 00:16:51.497 "name": "BaseBdev2", 00:16:51.497 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:51.497 "is_configured": true, 00:16:51.497 "data_offset": 2048, 00:16:51.497 "data_size": 63488 00:16:51.497 }, 00:16:51.497 { 00:16:51.497 "name": "BaseBdev3", 00:16:51.497 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:51.497 "is_configured": true, 00:16:51.497 "data_offset": 2048, 00:16:51.497 "data_size": 63488 00:16:51.497 }, 00:16:51.497 { 00:16:51.497 "name": "BaseBdev4", 00:16:51.497 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:51.497 "is_configured": true, 00:16:51.497 "data_offset": 2048, 00:16:51.497 "data_size": 63488 00:16:51.497 } 00:16:51.497 ] 00:16:51.497 }' 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.497 17:58:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.067 17:58:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.067 17:58:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.067 17:58:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.067 [2024-10-25 17:58:10.331449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.067 [2024-10-25 17:58:10.331740] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:52.067 [2024-10-25 17:58:10.331809] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:52.067 [2024-10-25 17:58:10.331933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.067 [2024-10-25 17:58:10.347284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:52.067 17:58:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.067 17:58:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:52.067 [2024-10-25 17:58:10.358441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.004 17:58:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.004 "name": "raid_bdev1", 00:16:53.004 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:53.004 "strip_size_kb": 64, 00:16:53.004 "state": "online", 00:16:53.004 "raid_level": "raid5f", 00:16:53.004 "superblock": true, 00:16:53.004 "num_base_bdevs": 4, 00:16:53.004 "num_base_bdevs_discovered": 4, 00:16:53.004 "num_base_bdevs_operational": 4, 00:16:53.004 "process": { 00:16:53.004 "type": "rebuild", 00:16:53.004 "target": "spare", 00:16:53.004 "progress": { 00:16:53.004 "blocks": 19200, 00:16:53.004 "percent": 10 00:16:53.004 } 00:16:53.004 }, 00:16:53.004 "base_bdevs_list": [ 00:16:53.004 { 00:16:53.004 "name": "spare", 00:16:53.004 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:53.004 "is_configured": true, 00:16:53.004 "data_offset": 2048, 00:16:53.004 "data_size": 63488 00:16:53.004 }, 00:16:53.004 { 00:16:53.004 "name": "BaseBdev2", 00:16:53.004 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:53.004 "is_configured": true, 00:16:53.004 "data_offset": 2048, 00:16:53.004 "data_size": 63488 00:16:53.004 }, 00:16:53.004 { 00:16:53.004 "name": "BaseBdev3", 00:16:53.004 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:53.004 "is_configured": true, 00:16:53.004 "data_offset": 2048, 00:16:53.004 "data_size": 
63488 00:16:53.004 }, 00:16:53.004 { 00:16:53.004 "name": "BaseBdev4", 00:16:53.004 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:53.004 "is_configured": true, 00:16:53.004 "data_offset": 2048, 00:16:53.004 "data_size": 63488 00:16:53.004 } 00:16:53.004 ] 00:16:53.004 }' 00:16:53.004 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.263 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.264 [2024-10-25 17:58:11.505898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.264 [2024-10-25 17:58:11.565928] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:53.264 [2024-10-25 17:58:11.566075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.264 [2024-10-25 17:58:11.566138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.264 [2024-10-25 17:58:11.566168] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.264 "name": "raid_bdev1", 00:16:53.264 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:53.264 "strip_size_kb": 64, 00:16:53.264 "state": "online", 00:16:53.264 "raid_level": "raid5f", 00:16:53.264 "superblock": true, 00:16:53.264 "num_base_bdevs": 4, 00:16:53.264 "num_base_bdevs_discovered": 3, 00:16:53.264 "num_base_bdevs_operational": 3, 00:16:53.264 "base_bdevs_list": [ 00:16:53.264 
{ 00:16:53.264 "name": null, 00:16:53.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.264 "is_configured": false, 00:16:53.264 "data_offset": 0, 00:16:53.264 "data_size": 63488 00:16:53.264 }, 00:16:53.264 { 00:16:53.264 "name": "BaseBdev2", 00:16:53.264 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:53.264 "is_configured": true, 00:16:53.264 "data_offset": 2048, 00:16:53.264 "data_size": 63488 00:16:53.264 }, 00:16:53.264 { 00:16:53.264 "name": "BaseBdev3", 00:16:53.264 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:53.264 "is_configured": true, 00:16:53.264 "data_offset": 2048, 00:16:53.264 "data_size": 63488 00:16:53.264 }, 00:16:53.264 { 00:16:53.264 "name": "BaseBdev4", 00:16:53.264 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:53.264 "is_configured": true, 00:16:53.264 "data_offset": 2048, 00:16:53.264 "data_size": 63488 00:16:53.264 } 00:16:53.264 ] 00:16:53.264 }' 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.264 17:58:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.832 17:58:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.832 17:58:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.832 17:58:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.832 [2024-10-25 17:58:12.067402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.832 [2024-10-25 17:58:12.067579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.832 [2024-10-25 17:58:12.067652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:53.832 [2024-10-25 17:58:12.067702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.832 [2024-10-25 17:58:12.068448] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.832 [2024-10-25 17:58:12.068578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.832 [2024-10-25 17:58:12.068770] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:53.832 [2024-10-25 17:58:12.068854] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.832 [2024-10-25 17:58:12.068932] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:53.832 [2024-10-25 17:58:12.069037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.832 [2024-10-25 17:58:12.083593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:53.832 spare 00:16:53.832 17:58:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.832 17:58:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:53.832 [2024-10-25 17:58:12.093096] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.771 "name": "raid_bdev1", 00:16:54.771 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:54.771 "strip_size_kb": 64, 00:16:54.771 "state": "online", 00:16:54.771 "raid_level": "raid5f", 00:16:54.771 "superblock": true, 00:16:54.771 "num_base_bdevs": 4, 00:16:54.771 "num_base_bdevs_discovered": 4, 00:16:54.771 "num_base_bdevs_operational": 4, 00:16:54.771 "process": { 00:16:54.771 "type": "rebuild", 00:16:54.771 "target": "spare", 00:16:54.771 "progress": { 00:16:54.771 "blocks": 19200, 00:16:54.771 "percent": 10 00:16:54.771 } 00:16:54.771 }, 00:16:54.771 "base_bdevs_list": [ 00:16:54.771 { 00:16:54.771 "name": "spare", 00:16:54.771 "uuid": "6eb16be8-3a51-533f-b509-7326bf250974", 00:16:54.771 "is_configured": true, 00:16:54.771 "data_offset": 2048, 00:16:54.771 "data_size": 63488 00:16:54.771 }, 00:16:54.771 { 00:16:54.771 "name": "BaseBdev2", 00:16:54.771 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:54.771 "is_configured": true, 00:16:54.771 "data_offset": 2048, 00:16:54.771 "data_size": 63488 00:16:54.771 }, 00:16:54.771 { 00:16:54.771 "name": "BaseBdev3", 00:16:54.771 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:54.771 "is_configured": true, 00:16:54.771 "data_offset": 2048, 00:16:54.771 "data_size": 63488 00:16:54.771 }, 00:16:54.771 { 00:16:54.771 "name": "BaseBdev4", 00:16:54.771 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:54.771 "is_configured": true, 00:16:54.771 "data_offset": 2048, 00:16:54.771 "data_size": 63488 00:16:54.771 } 
00:16:54.771 ] 00:16:54.771 }' 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.771 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.031 [2024-10-25 17:58:13.224705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.031 [2024-10-25 17:58:13.301422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.031 [2024-10-25 17:58:13.301498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.031 [2024-10-25 17:58:13.301520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.031 [2024-10-25 17:58:13.301528] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.031 "name": "raid_bdev1", 00:16:55.031 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:55.031 "strip_size_kb": 64, 00:16:55.031 "state": "online", 00:16:55.031 "raid_level": "raid5f", 00:16:55.031 "superblock": true, 00:16:55.031 "num_base_bdevs": 4, 00:16:55.031 "num_base_bdevs_discovered": 3, 00:16:55.031 "num_base_bdevs_operational": 3, 00:16:55.031 "base_bdevs_list": [ 00:16:55.031 { 00:16:55.031 "name": null, 00:16:55.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.031 "is_configured": false, 00:16:55.031 "data_offset": 0, 00:16:55.031 "data_size": 63488 00:16:55.031 }, 00:16:55.031 { 00:16:55.031 
"name": "BaseBdev2", 00:16:55.031 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:55.031 "is_configured": true, 00:16:55.031 "data_offset": 2048, 00:16:55.031 "data_size": 63488 00:16:55.031 }, 00:16:55.031 { 00:16:55.031 "name": "BaseBdev3", 00:16:55.031 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:55.031 "is_configured": true, 00:16:55.031 "data_offset": 2048, 00:16:55.031 "data_size": 63488 00:16:55.031 }, 00:16:55.031 { 00:16:55.031 "name": "BaseBdev4", 00:16:55.031 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:55.031 "is_configured": true, 00:16:55.031 "data_offset": 2048, 00:16:55.031 "data_size": 63488 00:16:55.031 } 00:16:55.031 ] 00:16:55.031 }' 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.031 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:55.600 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.600 "name": "raid_bdev1", 00:16:55.600 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:55.600 "strip_size_kb": 64, 00:16:55.600 "state": "online", 00:16:55.600 "raid_level": "raid5f", 00:16:55.600 "superblock": true, 00:16:55.600 "num_base_bdevs": 4, 00:16:55.600 "num_base_bdevs_discovered": 3, 00:16:55.600 "num_base_bdevs_operational": 3, 00:16:55.600 "base_bdevs_list": [ 00:16:55.600 { 00:16:55.600 "name": null, 00:16:55.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.600 "is_configured": false, 00:16:55.600 "data_offset": 0, 00:16:55.601 "data_size": 63488 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": "BaseBdev2", 00:16:55.601 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 2048, 00:16:55.601 "data_size": 63488 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": "BaseBdev3", 00:16:55.601 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 2048, 00:16:55.601 "data_size": 63488 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": "BaseBdev4", 00:16:55.601 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 2048, 00:16:55.601 "data_size": 63488 00:16:55.601 } 00:16:55.601 ] 00:16:55.601 }' 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.601 [2024-10-25 17:58:13.971340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:55.601 [2024-10-25 17:58:13.971397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.601 [2024-10-25 17:58:13.971419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:55.601 [2024-10-25 17:58:13.971429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.601 [2024-10-25 17:58:13.971907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.601 [2024-10-25 17:58:13.971928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.601 [2024-10-25 17:58:13.972008] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:55.601 [2024-10-25 17:58:13.972022] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.601 [2024-10-25 17:58:13.972035] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:55.601 [2024-10-25 17:58:13.972046] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:16:55.601 BaseBdev1 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.601 17:58:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.985 17:58:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.985 17:58:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.985 "name": "raid_bdev1", 00:16:56.985 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:56.985 "strip_size_kb": 64, 00:16:56.985 "state": "online", 00:16:56.985 "raid_level": "raid5f", 00:16:56.985 "superblock": true, 00:16:56.985 "num_base_bdevs": 4, 00:16:56.985 "num_base_bdevs_discovered": 3, 00:16:56.985 "num_base_bdevs_operational": 3, 00:16:56.985 "base_bdevs_list": [ 00:16:56.985 { 00:16:56.985 "name": null, 00:16:56.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.985 "is_configured": false, 00:16:56.985 "data_offset": 0, 00:16:56.985 "data_size": 63488 00:16:56.985 }, 00:16:56.985 { 00:16:56.985 "name": "BaseBdev2", 00:16:56.985 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:56.985 "is_configured": true, 00:16:56.985 "data_offset": 2048, 00:16:56.985 "data_size": 63488 00:16:56.985 }, 00:16:56.985 { 00:16:56.985 "name": "BaseBdev3", 00:16:56.985 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:56.985 "is_configured": true, 00:16:56.985 "data_offset": 2048, 00:16:56.985 "data_size": 63488 00:16:56.985 }, 00:16:56.985 { 00:16:56.985 "name": "BaseBdev4", 00:16:56.985 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:56.985 "is_configured": true, 00:16:56.985 "data_offset": 2048, 00:16:56.985 "data_size": 63488 00:16:56.985 } 00:16:56.985 ] 00:16:56.985 }' 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.985 17:58:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.985 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.248 "name": "raid_bdev1", 00:16:57.248 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:57.248 "strip_size_kb": 64, 00:16:57.248 "state": "online", 00:16:57.248 "raid_level": "raid5f", 00:16:57.248 "superblock": true, 00:16:57.248 "num_base_bdevs": 4, 00:16:57.248 "num_base_bdevs_discovered": 3, 00:16:57.248 "num_base_bdevs_operational": 3, 00:16:57.248 "base_bdevs_list": [ 00:16:57.248 { 00:16:57.248 "name": null, 00:16:57.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.248 "is_configured": false, 00:16:57.248 "data_offset": 0, 00:16:57.248 "data_size": 63488 00:16:57.248 }, 00:16:57.248 { 00:16:57.248 "name": "BaseBdev2", 00:16:57.248 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:57.248 "is_configured": true, 00:16:57.248 "data_offset": 2048, 00:16:57.248 "data_size": 63488 00:16:57.248 }, 00:16:57.248 { 00:16:57.248 "name": "BaseBdev3", 00:16:57.248 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:57.248 "is_configured": true, 00:16:57.248 "data_offset": 2048, 00:16:57.248 "data_size": 63488 00:16:57.248 }, 00:16:57.248 { 00:16:57.248 "name": "BaseBdev4", 00:16:57.248 "uuid": 
"93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:57.248 "is_configured": true, 00:16:57.248 "data_offset": 2048, 00:16:57.248 "data_size": 63488 00:16:57.248 } 00:16:57.248 ] 00:16:57.248 }' 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.248 [2024-10-25 17:58:15.576770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.248 
[2024-10-25 17:58:15.577017] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.248 [2024-10-25 17:58:15.577041] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:57.248 request: 00:16:57.248 { 00:16:57.248 "base_bdev": "BaseBdev1", 00:16:57.248 "raid_bdev": "raid_bdev1", 00:16:57.248 "method": "bdev_raid_add_base_bdev", 00:16:57.248 "req_id": 1 00:16:57.248 } 00:16:57.248 Got JSON-RPC error response 00:16:57.248 response: 00:16:57.248 { 00:16:57.248 "code": -22, 00:16:57.248 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:57.248 } 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.248 17:58:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.187 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.446 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.446 "name": "raid_bdev1", 00:16:58.446 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:58.446 "strip_size_kb": 64, 00:16:58.446 "state": "online", 00:16:58.446 "raid_level": "raid5f", 00:16:58.446 "superblock": true, 00:16:58.446 "num_base_bdevs": 4, 00:16:58.446 "num_base_bdevs_discovered": 3, 00:16:58.446 "num_base_bdevs_operational": 3, 00:16:58.446 "base_bdevs_list": [ 00:16:58.446 { 00:16:58.446 "name": null, 00:16:58.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.446 "is_configured": false, 00:16:58.446 "data_offset": 0, 00:16:58.446 "data_size": 63488 00:16:58.446 }, 00:16:58.446 { 00:16:58.446 "name": "BaseBdev2", 00:16:58.446 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:58.446 "is_configured": true, 00:16:58.446 "data_offset": 2048, 00:16:58.446 "data_size": 63488 00:16:58.446 }, 00:16:58.446 { 00:16:58.446 "name": 
"BaseBdev3", 00:16:58.446 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:58.446 "is_configured": true, 00:16:58.446 "data_offset": 2048, 00:16:58.446 "data_size": 63488 00:16:58.446 }, 00:16:58.446 { 00:16:58.446 "name": "BaseBdev4", 00:16:58.446 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:58.446 "is_configured": true, 00:16:58.446 "data_offset": 2048, 00:16:58.446 "data_size": 63488 00:16:58.446 } 00:16:58.446 ] 00:16:58.446 }' 00:16:58.446 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.446 17:58:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.704 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.704 "name": "raid_bdev1", 00:16:58.704 "uuid": "46d8ed8f-1bb7-48b1-bb25-be6cc450a6ac", 00:16:58.705 
"strip_size_kb": 64, 00:16:58.705 "state": "online", 00:16:58.705 "raid_level": "raid5f", 00:16:58.705 "superblock": true, 00:16:58.705 "num_base_bdevs": 4, 00:16:58.705 "num_base_bdevs_discovered": 3, 00:16:58.705 "num_base_bdevs_operational": 3, 00:16:58.705 "base_bdevs_list": [ 00:16:58.705 { 00:16:58.705 "name": null, 00:16:58.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.705 "is_configured": false, 00:16:58.705 "data_offset": 0, 00:16:58.705 "data_size": 63488 00:16:58.705 }, 00:16:58.705 { 00:16:58.705 "name": "BaseBdev2", 00:16:58.705 "uuid": "bd53b6a5-4aec-52ab-8df7-e39ff452ee28", 00:16:58.705 "is_configured": true, 00:16:58.705 "data_offset": 2048, 00:16:58.705 "data_size": 63488 00:16:58.705 }, 00:16:58.705 { 00:16:58.705 "name": "BaseBdev3", 00:16:58.705 "uuid": "b21a30bb-0b28-558b-b682-523903a15853", 00:16:58.705 "is_configured": true, 00:16:58.705 "data_offset": 2048, 00:16:58.705 "data_size": 63488 00:16:58.705 }, 00:16:58.705 { 00:16:58.705 "name": "BaseBdev4", 00:16:58.705 "uuid": "93154c62-6656-5c6a-bc20-4a48eebb0fd3", 00:16:58.705 "is_configured": true, 00:16:58.705 "data_offset": 2048, 00:16:58.705 "data_size": 63488 00:16:58.705 } 00:16:58.705 ] 00:16:58.705 }' 00:16:58.705 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85008 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85008 ']' 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85008 00:16:58.962 
17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85008 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.962 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85008' 00:16:58.962 killing process with pid 85008 00:16:58.963 Received shutdown signal, test time was about 60.000000 seconds 00:16:58.963 00:16:58.963 Latency(us) 00:16:58.963 [2024-10-25T17:58:17.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.963 [2024-10-25T17:58:17.399Z] =================================================================================================================== 00:16:58.963 [2024-10-25T17:58:17.399Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.963 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85008 00:16:58.963 [2024-10-25 17:58:17.260862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.963 [2024-10-25 17:58:17.261011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.963 17:58:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85008 00:16:58.963 [2024-10-25 17:58:17.261105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.963 [2024-10-25 17:58:17.261120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:59.533 [2024-10-25 17:58:17.759353] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.472 17:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.472 00:17:00.472 real 0m27.429s 00:17:00.472 user 0m34.517s 00:17:00.472 sys 0m3.181s 00:17:00.472 17:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.472 17:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.472 ************************************ 00:17:00.472 END TEST raid5f_rebuild_test_sb 00:17:00.472 ************************************ 00:17:00.733 17:58:18 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:00.733 17:58:18 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:00.733 17:58:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:00.733 17:58:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.733 17:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.733 ************************************ 00:17:00.733 START TEST raid_state_function_test_sb_4k 00:17:00.733 ************************************ 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:00.733 Process raid pid: 85818 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=85818 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85818' 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85818 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85818 ']' 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.733 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:00.734 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.734 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.734 17:58:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.734 [2024-10-25 17:58:19.077372] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:17:00.734 [2024-10-25 17:58:19.077587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.994 [2024-10-25 17:58:19.253489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.994 [2024-10-25 17:58:19.377132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.253 [2024-10-25 17:58:19.598931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.253 [2024-10-25 17:58:19.598971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.515 [2024-10-25 17:58:19.938858] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.515 [2024-10-25 17:58:19.938969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.515 [2024-10-25 17:58:19.938984] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.515 [2024-10-25 17:58:19.938994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.515 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.775 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.775 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.775 "name": "Existed_Raid", 00:17:01.775 "uuid": 
"ab403549-8923-4ff3-a303-6259f48cc6fa", 00:17:01.775 "strip_size_kb": 0, 00:17:01.775 "state": "configuring", 00:17:01.775 "raid_level": "raid1", 00:17:01.775 "superblock": true, 00:17:01.775 "num_base_bdevs": 2, 00:17:01.775 "num_base_bdevs_discovered": 0, 00:17:01.775 "num_base_bdevs_operational": 2, 00:17:01.775 "base_bdevs_list": [ 00:17:01.775 { 00:17:01.775 "name": "BaseBdev1", 00:17:01.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.775 "is_configured": false, 00:17:01.775 "data_offset": 0, 00:17:01.775 "data_size": 0 00:17:01.775 }, 00:17:01.775 { 00:17:01.775 "name": "BaseBdev2", 00:17:01.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.775 "is_configured": false, 00:17:01.775 "data_offset": 0, 00:17:01.775 "data_size": 0 00:17:01.775 } 00:17:01.775 ] 00:17:01.775 }' 00:17:01.775 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.775 17:58:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 [2024-10-25 17:58:20.394045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.036 [2024-10-25 17:58:20.394087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.036 17:58:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 [2024-10-25 17:58:20.406021] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.036 [2024-10-25 17:58:20.406122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.036 [2024-10-25 17:58:20.406137] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.036 [2024-10-25 17:58:20.406150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 [2024-10-25 17:58:20.458892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.036 BaseBdev1 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.036 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.297 [ 00:17:02.297 { 00:17:02.297 "name": "BaseBdev1", 00:17:02.297 "aliases": [ 00:17:02.297 "89222cf7-682c-4996-b18d-a7a64c0f36ae" 00:17:02.297 ], 00:17:02.297 "product_name": "Malloc disk", 00:17:02.297 "block_size": 4096, 00:17:02.297 "num_blocks": 8192, 00:17:02.297 "uuid": "89222cf7-682c-4996-b18d-a7a64c0f36ae", 00:17:02.297 "assigned_rate_limits": { 00:17:02.297 "rw_ios_per_sec": 0, 00:17:02.297 "rw_mbytes_per_sec": 0, 00:17:02.297 "r_mbytes_per_sec": 0, 00:17:02.297 "w_mbytes_per_sec": 0 00:17:02.297 }, 00:17:02.297 "claimed": true, 00:17:02.297 "claim_type": "exclusive_write", 00:17:02.297 "zoned": false, 00:17:02.297 "supported_io_types": { 00:17:02.297 "read": true, 00:17:02.297 "write": true, 00:17:02.297 "unmap": true, 00:17:02.297 "flush": true, 00:17:02.297 "reset": true, 00:17:02.297 "nvme_admin": false, 00:17:02.297 "nvme_io": false, 00:17:02.297 "nvme_io_md": false, 00:17:02.297 "write_zeroes": true, 00:17:02.297 "zcopy": true, 00:17:02.297 
"get_zone_info": false, 00:17:02.297 "zone_management": false, 00:17:02.297 "zone_append": false, 00:17:02.297 "compare": false, 00:17:02.297 "compare_and_write": false, 00:17:02.297 "abort": true, 00:17:02.297 "seek_hole": false, 00:17:02.297 "seek_data": false, 00:17:02.297 "copy": true, 00:17:02.297 "nvme_iov_md": false 00:17:02.297 }, 00:17:02.297 "memory_domains": [ 00:17:02.297 { 00:17:02.297 "dma_device_id": "system", 00:17:02.297 "dma_device_type": 1 00:17:02.297 }, 00:17:02.297 { 00:17:02.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.297 "dma_device_type": 2 00:17:02.297 } 00:17:02.297 ], 00:17:02.297 "driver_specific": {} 00:17:02.297 } 00:17:02.297 ] 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.297 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.297 "name": "Existed_Raid", 00:17:02.297 "uuid": "32dcd8d7-95ef-40a9-b39e-a4b64f4ecfaf", 00:17:02.297 "strip_size_kb": 0, 00:17:02.297 "state": "configuring", 00:17:02.297 "raid_level": "raid1", 00:17:02.297 "superblock": true, 00:17:02.297 "num_base_bdevs": 2, 00:17:02.297 "num_base_bdevs_discovered": 1, 00:17:02.297 "num_base_bdevs_operational": 2, 00:17:02.297 "base_bdevs_list": [ 00:17:02.297 { 00:17:02.297 "name": "BaseBdev1", 00:17:02.297 "uuid": "89222cf7-682c-4996-b18d-a7a64c0f36ae", 00:17:02.297 "is_configured": true, 00:17:02.297 "data_offset": 256, 00:17:02.297 "data_size": 7936 00:17:02.297 }, 00:17:02.297 { 00:17:02.297 "name": "BaseBdev2", 00:17:02.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.297 "is_configured": false, 00:17:02.297 "data_offset": 0, 00:17:02.297 "data_size": 0 00:17:02.297 } 00:17:02.297 ] 00:17:02.297 }' 00:17:02.298 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.298 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.563 [2024-10-25 17:58:20.942084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.563 [2024-10-25 17:58:20.942200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.563 [2024-10-25 17:58:20.954099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.563 [2024-10-25 17:58:20.955968] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.563 [2024-10-25 17:58:20.956009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.563 17:58:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.563 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.564 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.564 17:58:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.825 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.825 "name": "Existed_Raid", 00:17:02.825 "uuid": "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c", 00:17:02.825 "strip_size_kb": 0, 00:17:02.825 "state": "configuring", 00:17:02.825 "raid_level": "raid1", 00:17:02.825 "superblock": true, 
00:17:02.825 "num_base_bdevs": 2, 00:17:02.825 "num_base_bdevs_discovered": 1, 00:17:02.825 "num_base_bdevs_operational": 2, 00:17:02.825 "base_bdevs_list": [ 00:17:02.825 { 00:17:02.825 "name": "BaseBdev1", 00:17:02.825 "uuid": "89222cf7-682c-4996-b18d-a7a64c0f36ae", 00:17:02.825 "is_configured": true, 00:17:02.825 "data_offset": 256, 00:17:02.825 "data_size": 7936 00:17:02.825 }, 00:17:02.825 { 00:17:02.825 "name": "BaseBdev2", 00:17:02.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.825 "is_configured": false, 00:17:02.825 "data_offset": 0, 00:17:02.825 "data_size": 0 00:17:02.825 } 00:17:02.825 ] 00:17:02.825 }' 00:17:02.825 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.825 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.089 [2024-10-25 17:58:21.483699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.089 [2024-10-25 17:58:21.484118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.089 [2024-10-25 17:58:21.484178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.089 [2024-10-25 17:58:21.484502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:03.089 [2024-10-25 17:58:21.484728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.089 [2024-10-25 17:58:21.484783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raBaseBdev2 
00:17:03.089 id_bdev 0x617000007e80 00:17:03.089 [2024-10-25 17:58:21.485026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.089 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.089 [ 00:17:03.089 { 00:17:03.089 "name": "BaseBdev2", 00:17:03.089 "aliases": [ 00:17:03.089 "c6d56efc-78f9-42e5-9eff-9171b52108ac" 00:17:03.089 ], 00:17:03.089 "product_name": "Malloc 
disk", 00:17:03.089 "block_size": 4096, 00:17:03.089 "num_blocks": 8192, 00:17:03.089 "uuid": "c6d56efc-78f9-42e5-9eff-9171b52108ac", 00:17:03.089 "assigned_rate_limits": { 00:17:03.089 "rw_ios_per_sec": 0, 00:17:03.089 "rw_mbytes_per_sec": 0, 00:17:03.089 "r_mbytes_per_sec": 0, 00:17:03.089 "w_mbytes_per_sec": 0 00:17:03.089 }, 00:17:03.089 "claimed": true, 00:17:03.089 "claim_type": "exclusive_write", 00:17:03.089 "zoned": false, 00:17:03.089 "supported_io_types": { 00:17:03.089 "read": true, 00:17:03.089 "write": true, 00:17:03.089 "unmap": true, 00:17:03.089 "flush": true, 00:17:03.089 "reset": true, 00:17:03.089 "nvme_admin": false, 00:17:03.089 "nvme_io": false, 00:17:03.089 "nvme_io_md": false, 00:17:03.089 "write_zeroes": true, 00:17:03.089 "zcopy": true, 00:17:03.089 "get_zone_info": false, 00:17:03.089 "zone_management": false, 00:17:03.089 "zone_append": false, 00:17:03.089 "compare": false, 00:17:03.089 "compare_and_write": false, 00:17:03.089 "abort": true, 00:17:03.089 "seek_hole": false, 00:17:03.089 "seek_data": false, 00:17:03.089 "copy": true, 00:17:03.089 "nvme_iov_md": false 00:17:03.089 }, 00:17:03.089 "memory_domains": [ 00:17:03.089 { 00:17:03.089 "dma_device_id": "system", 00:17:03.089 "dma_device_type": 1 00:17:03.089 }, 00:17:03.089 { 00:17:03.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.089 "dma_device_type": 2 00:17:03.089 } 00:17:03.089 ], 00:17:03.089 "driver_specific": {} 00:17:03.089 } 00:17:03.089 ] 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.349 "name": "Existed_Raid", 00:17:03.349 "uuid": "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c", 00:17:03.349 "strip_size_kb": 0, 00:17:03.349 "state": "online", 
00:17:03.349 "raid_level": "raid1", 00:17:03.349 "superblock": true, 00:17:03.349 "num_base_bdevs": 2, 00:17:03.349 "num_base_bdevs_discovered": 2, 00:17:03.349 "num_base_bdevs_operational": 2, 00:17:03.349 "base_bdevs_list": [ 00:17:03.349 { 00:17:03.349 "name": "BaseBdev1", 00:17:03.349 "uuid": "89222cf7-682c-4996-b18d-a7a64c0f36ae", 00:17:03.349 "is_configured": true, 00:17:03.349 "data_offset": 256, 00:17:03.349 "data_size": 7936 00:17:03.349 }, 00:17:03.349 { 00:17:03.349 "name": "BaseBdev2", 00:17:03.349 "uuid": "c6d56efc-78f9-42e5-9eff-9171b52108ac", 00:17:03.349 "is_configured": true, 00:17:03.349 "data_offset": 256, 00:17:03.349 "data_size": 7936 00:17:03.349 } 00:17:03.349 ] 00:17:03.349 }' 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.349 17:58:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:03.610 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 [2024-10-25 17:58:22.047167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.870 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.870 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.870 "name": "Existed_Raid", 00:17:03.870 "aliases": [ 00:17:03.870 "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c" 00:17:03.870 ], 00:17:03.870 "product_name": "Raid Volume", 00:17:03.870 "block_size": 4096, 00:17:03.870 "num_blocks": 7936, 00:17:03.870 "uuid": "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c", 00:17:03.870 "assigned_rate_limits": { 00:17:03.870 "rw_ios_per_sec": 0, 00:17:03.870 "rw_mbytes_per_sec": 0, 00:17:03.870 "r_mbytes_per_sec": 0, 00:17:03.870 "w_mbytes_per_sec": 0 00:17:03.870 }, 00:17:03.870 "claimed": false, 00:17:03.870 "zoned": false, 00:17:03.870 "supported_io_types": { 00:17:03.870 "read": true, 00:17:03.870 "write": true, 00:17:03.870 "unmap": false, 00:17:03.870 "flush": false, 00:17:03.870 "reset": true, 00:17:03.870 "nvme_admin": false, 00:17:03.870 "nvme_io": false, 00:17:03.870 "nvme_io_md": false, 00:17:03.870 "write_zeroes": true, 00:17:03.870 "zcopy": false, 00:17:03.870 "get_zone_info": false, 00:17:03.870 "zone_management": false, 00:17:03.870 "zone_append": false, 00:17:03.870 "compare": false, 00:17:03.870 "compare_and_write": false, 00:17:03.870 "abort": false, 00:17:03.871 "seek_hole": false, 00:17:03.871 "seek_data": false, 00:17:03.871 "copy": false, 00:17:03.871 "nvme_iov_md": false 00:17:03.871 }, 00:17:03.871 "memory_domains": [ 00:17:03.871 { 00:17:03.871 "dma_device_id": "system", 00:17:03.871 "dma_device_type": 1 00:17:03.871 }, 00:17:03.871 { 00:17:03.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.871 "dma_device_type": 2 00:17:03.871 }, 00:17:03.871 { 00:17:03.871 
"dma_device_id": "system", 00:17:03.871 "dma_device_type": 1 00:17:03.871 }, 00:17:03.871 { 00:17:03.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.871 "dma_device_type": 2 00:17:03.871 } 00:17:03.871 ], 00:17:03.871 "driver_specific": { 00:17:03.871 "raid": { 00:17:03.871 "uuid": "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c", 00:17:03.871 "strip_size_kb": 0, 00:17:03.871 "state": "online", 00:17:03.871 "raid_level": "raid1", 00:17:03.871 "superblock": true, 00:17:03.871 "num_base_bdevs": 2, 00:17:03.871 "num_base_bdevs_discovered": 2, 00:17:03.871 "num_base_bdevs_operational": 2, 00:17:03.871 "base_bdevs_list": [ 00:17:03.871 { 00:17:03.871 "name": "BaseBdev1", 00:17:03.871 "uuid": "89222cf7-682c-4996-b18d-a7a64c0f36ae", 00:17:03.871 "is_configured": true, 00:17:03.871 "data_offset": 256, 00:17:03.871 "data_size": 7936 00:17:03.871 }, 00:17:03.871 { 00:17:03.871 "name": "BaseBdev2", 00:17:03.871 "uuid": "c6d56efc-78f9-42e5-9eff-9171b52108ac", 00:17:03.871 "is_configured": true, 00:17:03.871 "data_offset": 256, 00:17:03.871 "data_size": 7936 00:17:03.871 } 00:17:03.871 ] 00:17:03.871 } 00:17:03.871 } 00:17:03.871 }' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.871 BaseBdev2' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.871 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.871 
17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.871 [2024-10-25 17:58:22.278572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.131 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.132 17:58:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.132 "name": "Existed_Raid", 00:17:04.132 "uuid": "9fb4c044-efb3-4f52-8bad-bdfcc7a24e7c", 00:17:04.132 "strip_size_kb": 0, 00:17:04.132 "state": "online", 00:17:04.132 "raid_level": "raid1", 00:17:04.132 "superblock": true, 00:17:04.132 "num_base_bdevs": 2, 00:17:04.132 "num_base_bdevs_discovered": 1, 00:17:04.132 "num_base_bdevs_operational": 1, 00:17:04.132 "base_bdevs_list": [ 00:17:04.132 { 00:17:04.132 "name": null, 00:17:04.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.132 "is_configured": false, 00:17:04.132 "data_offset": 0, 00:17:04.132 "data_size": 7936 00:17:04.132 }, 00:17:04.132 { 00:17:04.132 "name": "BaseBdev2", 00:17:04.132 "uuid": "c6d56efc-78f9-42e5-9eff-9171b52108ac", 00:17:04.132 "is_configured": true, 00:17:04.132 "data_offset": 256, 00:17:04.132 "data_size": 7936 00:17:04.132 } 00:17:04.132 ] 00:17:04.132 }' 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.132 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.703 17:58:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.703 17:58:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.703 [2024-10-25 17:58:22.926847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.703 [2024-10-25 17:58:22.926985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.703 [2024-10-25 17:58:23.044094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.703 [2024-10-25 17:58:23.044166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.703 [2024-10-25 17:58:23.044181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:04.703 17:58:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85818 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85818 ']' 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85818 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.703 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85818 00:17:04.963 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.963 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.963 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85818' 00:17:04.963 killing process with pid 85818 00:17:04.963 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85818 00:17:04.963 [2024-10-25 17:58:23.144036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.963 17:58:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85818 00:17:04.963 [2024-10-25 17:58:23.164687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.343 ************************************ 00:17:06.343 END TEST raid_state_function_test_sb_4k 00:17:06.343 ************************************ 00:17:06.343 17:58:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:06.343 00:17:06.343 real 0m5.521s 00:17:06.343 user 0m7.862s 00:17:06.343 sys 0m0.928s 00:17:06.343 17:58:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.343 17:58:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.343 17:58:24 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:06.343 17:58:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:06.343 17:58:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.343 17:58:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.343 ************************************ 00:17:06.343 START TEST raid_superblock_test_4k 00:17:06.343 ************************************ 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86076 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86076 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86076 ']' 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.343 17:58:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.343 [2024-10-25 17:58:24.677657] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:06.343 [2024-10-25 17:58:24.677898] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86076 ] 00:17:06.603 [2024-10-25 17:58:24.841402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.603 [2024-10-25 17:58:24.977998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.862 [2024-10-25 17:58:25.219099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.862 [2024-10-25 17:58:25.219135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:07.431 17:58:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 malloc1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 [2024-10-25 17:58:25.682063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.431 [2024-10-25 17:58:25.682153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.431 
[2024-10-25 17:58:25.682180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.431 [2024-10-25 17:58:25.682191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.431 [2024-10-25 17:58:25.684789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.431 [2024-10-25 17:58:25.684848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.431 pt1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 malloc2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 [2024-10-25 17:58:25.744185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.431 [2024-10-25 17:58:25.744345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.431 [2024-10-25 17:58:25.744387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.431 [2024-10-25 17:58:25.744418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.431 [2024-10-25 17:58:25.746742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.431 [2024-10-25 17:58:25.746818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.431 pt2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.431 [2024-10-25 17:58:25.756246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.431 [2024-10-25 17:58:25.758456] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.431 [2024-10-25 17:58:25.758721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.431 [2024-10-25 17:58:25.758779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.431 [2024-10-25 17:58:25.759111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:07.431 [2024-10-25 17:58:25.759353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.431 [2024-10-25 17:58:25.759410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.431 [2024-10-25 17:58:25.759658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.431 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.432 "name": "raid_bdev1", 00:17:07.432 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:07.432 "strip_size_kb": 0, 00:17:07.432 "state": "online", 00:17:07.432 "raid_level": "raid1", 00:17:07.432 "superblock": true, 00:17:07.432 "num_base_bdevs": 2, 00:17:07.432 "num_base_bdevs_discovered": 2, 00:17:07.432 "num_base_bdevs_operational": 2, 00:17:07.432 "base_bdevs_list": [ 00:17:07.432 { 00:17:07.432 "name": "pt1", 00:17:07.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.432 "is_configured": true, 00:17:07.432 "data_offset": 256, 00:17:07.432 "data_size": 7936 00:17:07.432 }, 00:17:07.432 { 00:17:07.432 "name": "pt2", 00:17:07.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.432 "is_configured": true, 00:17:07.432 "data_offset": 256, 00:17:07.432 "data_size": 7936 00:17:07.432 } 00:17:07.432 ] 00:17:07.432 }' 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.432 17:58:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:08.001 17:58:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.001 [2024-10-25 17:58:26.267711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.001 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.001 "name": "raid_bdev1", 00:17:08.001 "aliases": [ 00:17:08.001 "3a816599-6f48-43e2-a82c-fa18fb5f68f1" 00:17:08.001 ], 00:17:08.001 "product_name": "Raid Volume", 00:17:08.001 "block_size": 4096, 00:17:08.001 "num_blocks": 7936, 00:17:08.001 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:08.001 "assigned_rate_limits": { 00:17:08.001 "rw_ios_per_sec": 0, 00:17:08.001 "rw_mbytes_per_sec": 0, 00:17:08.001 "r_mbytes_per_sec": 0, 00:17:08.001 "w_mbytes_per_sec": 0 00:17:08.001 }, 00:17:08.001 "claimed": false, 00:17:08.001 "zoned": false, 00:17:08.001 "supported_io_types": { 00:17:08.001 "read": true, 00:17:08.001 "write": true, 00:17:08.002 "unmap": false, 00:17:08.002 "flush": false, 
00:17:08.002 "reset": true, 00:17:08.002 "nvme_admin": false, 00:17:08.002 "nvme_io": false, 00:17:08.002 "nvme_io_md": false, 00:17:08.002 "write_zeroes": true, 00:17:08.002 "zcopy": false, 00:17:08.002 "get_zone_info": false, 00:17:08.002 "zone_management": false, 00:17:08.002 "zone_append": false, 00:17:08.002 "compare": false, 00:17:08.002 "compare_and_write": false, 00:17:08.002 "abort": false, 00:17:08.002 "seek_hole": false, 00:17:08.002 "seek_data": false, 00:17:08.002 "copy": false, 00:17:08.002 "nvme_iov_md": false 00:17:08.002 }, 00:17:08.002 "memory_domains": [ 00:17:08.002 { 00:17:08.002 "dma_device_id": "system", 00:17:08.002 "dma_device_type": 1 00:17:08.002 }, 00:17:08.002 { 00:17:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.002 "dma_device_type": 2 00:17:08.002 }, 00:17:08.002 { 00:17:08.002 "dma_device_id": "system", 00:17:08.002 "dma_device_type": 1 00:17:08.002 }, 00:17:08.002 { 00:17:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.002 "dma_device_type": 2 00:17:08.002 } 00:17:08.002 ], 00:17:08.002 "driver_specific": { 00:17:08.002 "raid": { 00:17:08.002 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:08.002 "strip_size_kb": 0, 00:17:08.002 "state": "online", 00:17:08.002 "raid_level": "raid1", 00:17:08.002 "superblock": true, 00:17:08.002 "num_base_bdevs": 2, 00:17:08.002 "num_base_bdevs_discovered": 2, 00:17:08.002 "num_base_bdevs_operational": 2, 00:17:08.002 "base_bdevs_list": [ 00:17:08.002 { 00:17:08.002 "name": "pt1", 00:17:08.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.002 "is_configured": true, 00:17:08.002 "data_offset": 256, 00:17:08.002 "data_size": 7936 00:17:08.002 }, 00:17:08.002 { 00:17:08.002 "name": "pt2", 00:17:08.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.002 "is_configured": true, 00:17:08.002 "data_offset": 256, 00:17:08.002 "data_size": 7936 00:17:08.002 } 00:17:08.002 ] 00:17:08.002 } 00:17:08.002 } 00:17:08.002 }' 00:17:08.002 17:58:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:08.002 pt2' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.002 17:58:26 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.262 [2024-10-25 17:58:26.471351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a816599-6f48-43e2-a82c-fa18fb5f68f1 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 3a816599-6f48-43e2-a82c-fa18fb5f68f1 ']' 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.262 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 [2024-10-25 17:58:26.506993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.263 [2024-10-25 17:58:26.507039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.263 [2024-10-25 17:58:26.507146] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.263 [2024-10-25 17:58:26.507217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.263 [2024-10-25 17:58:26.507231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 [2024-10-25 17:58:26.654884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:08.263 [2024-10-25 17:58:26.657077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:08.263 [2024-10-25 17:58:26.657164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:08.263 [2024-10-25 17:58:26.657235] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:08.263 [2024-10-25 17:58:26.657253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.263 [2024-10-25 17:58:26.657265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:08.263 request: 00:17:08.263 { 00:17:08.263 "name": "raid_bdev1", 00:17:08.263 "raid_level": "raid1", 00:17:08.263 "base_bdevs": [ 00:17:08.263 "malloc1", 00:17:08.263 "malloc2" 00:17:08.263 ], 00:17:08.263 "superblock": false, 00:17:08.263 "method": "bdev_raid_create", 00:17:08.263 "req_id": 1 00:17:08.263 } 00:17:08.263 Got JSON-RPC error response 00:17:08.263 response: 00:17:08.263 { 00:17:08.263 "code": -17, 00:17:08.263 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:08.263 } 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.523 [2024-10-25 17:58:26.718719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.523 [2024-10-25 17:58:26.718918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.523 [2024-10-25 17:58:26.718963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:08.523 [2024-10-25 17:58:26.719007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.523 [2024-10-25 17:58:26.721586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.523 [2024-10-25 17:58:26.721684] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.523 [2024-10-25 17:58:26.721817] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.523 [2024-10-25 17:58:26.721935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.523 pt1 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.523 "name": "raid_bdev1", 00:17:08.523 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:08.523 "strip_size_kb": 0, 00:17:08.523 "state": "configuring", 00:17:08.523 "raid_level": "raid1", 00:17:08.523 "superblock": true, 00:17:08.523 "num_base_bdevs": 2, 00:17:08.523 "num_base_bdevs_discovered": 1, 00:17:08.523 "num_base_bdevs_operational": 2, 00:17:08.523 "base_bdevs_list": [ 00:17:08.523 { 00:17:08.523 "name": "pt1", 00:17:08.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.523 "is_configured": true, 00:17:08.523 "data_offset": 256, 00:17:08.523 "data_size": 7936 00:17:08.523 }, 00:17:08.523 { 00:17:08.523 "name": null, 00:17:08.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.523 "is_configured": false, 00:17:08.524 "data_offset": 256, 00:17:08.524 "data_size": 7936 00:17:08.524 } 00:17:08.524 ] 00:17:08.524 }' 00:17:08.524 17:58:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.524 17:58:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:08.783 [2024-10-25 17:58:27.158026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.783 [2024-10-25 17:58:27.158234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.783 [2024-10-25 17:58:27.158264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:08.783 [2024-10-25 17:58:27.158279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.783 [2024-10-25 17:58:27.158812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.783 [2024-10-25 17:58:27.158862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.783 [2024-10-25 17:58:27.158960] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.783 [2024-10-25 17:58:27.158990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.783 [2024-10-25 17:58:27.159122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.783 [2024-10-25 17:58:27.159146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.783 [2024-10-25 17:58:27.159417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:08.783 [2024-10-25 17:58:27.159594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.783 [2024-10-25 17:58:27.159606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:08.783 [2024-10-25 17:58:27.159761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.783 pt2 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:08.783 17:58:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.783 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.784 "name": "raid_bdev1", 00:17:08.784 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:08.784 
"strip_size_kb": 0, 00:17:08.784 "state": "online", 00:17:08.784 "raid_level": "raid1", 00:17:08.784 "superblock": true, 00:17:08.784 "num_base_bdevs": 2, 00:17:08.784 "num_base_bdevs_discovered": 2, 00:17:08.784 "num_base_bdevs_operational": 2, 00:17:08.784 "base_bdevs_list": [ 00:17:08.784 { 00:17:08.784 "name": "pt1", 00:17:08.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.784 "is_configured": true, 00:17:08.784 "data_offset": 256, 00:17:08.784 "data_size": 7936 00:17:08.784 }, 00:17:08.784 { 00:17:08.784 "name": "pt2", 00:17:08.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.784 "is_configured": true, 00:17:08.784 "data_offset": 256, 00:17:08.784 "data_size": 7936 00:17:08.784 } 00:17:08.784 ] 00:17:08.784 }' 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.784 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.423 17:58:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.423 [2024-10-25 17:58:27.609753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.423 "name": "raid_bdev1", 00:17:09.423 "aliases": [ 00:17:09.423 "3a816599-6f48-43e2-a82c-fa18fb5f68f1" 00:17:09.423 ], 00:17:09.423 "product_name": "Raid Volume", 00:17:09.423 "block_size": 4096, 00:17:09.423 "num_blocks": 7936, 00:17:09.423 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:09.423 "assigned_rate_limits": { 00:17:09.423 "rw_ios_per_sec": 0, 00:17:09.423 "rw_mbytes_per_sec": 0, 00:17:09.423 "r_mbytes_per_sec": 0, 00:17:09.423 "w_mbytes_per_sec": 0 00:17:09.423 }, 00:17:09.423 "claimed": false, 00:17:09.423 "zoned": false, 00:17:09.423 "supported_io_types": { 00:17:09.423 "read": true, 00:17:09.423 "write": true, 00:17:09.423 "unmap": false, 00:17:09.423 "flush": false, 00:17:09.423 "reset": true, 00:17:09.423 "nvme_admin": false, 00:17:09.423 "nvme_io": false, 00:17:09.423 "nvme_io_md": false, 00:17:09.423 "write_zeroes": true, 00:17:09.423 "zcopy": false, 00:17:09.423 "get_zone_info": false, 00:17:09.423 "zone_management": false, 00:17:09.423 "zone_append": false, 00:17:09.423 "compare": false, 00:17:09.423 "compare_and_write": false, 00:17:09.423 "abort": false, 00:17:09.423 "seek_hole": false, 00:17:09.423 "seek_data": false, 00:17:09.423 "copy": false, 00:17:09.423 "nvme_iov_md": false 00:17:09.423 }, 00:17:09.423 "memory_domains": [ 00:17:09.423 { 00:17:09.423 "dma_device_id": "system", 00:17:09.423 "dma_device_type": 1 00:17:09.423 }, 00:17:09.423 { 00:17:09.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.423 "dma_device_type": 2 00:17:09.423 }, 00:17:09.423 { 00:17:09.423 "dma_device_id": "system", 00:17:09.423 
"dma_device_type": 1 00:17:09.423 }, 00:17:09.423 { 00:17:09.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.423 "dma_device_type": 2 00:17:09.423 } 00:17:09.423 ], 00:17:09.423 "driver_specific": { 00:17:09.423 "raid": { 00:17:09.423 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:09.423 "strip_size_kb": 0, 00:17:09.423 "state": "online", 00:17:09.423 "raid_level": "raid1", 00:17:09.423 "superblock": true, 00:17:09.423 "num_base_bdevs": 2, 00:17:09.423 "num_base_bdevs_discovered": 2, 00:17:09.423 "num_base_bdevs_operational": 2, 00:17:09.423 "base_bdevs_list": [ 00:17:09.423 { 00:17:09.423 "name": "pt1", 00:17:09.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.423 "is_configured": true, 00:17:09.423 "data_offset": 256, 00:17:09.423 "data_size": 7936 00:17:09.423 }, 00:17:09.423 { 00:17:09.423 "name": "pt2", 00:17:09.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.423 "is_configured": true, 00:17:09.423 "data_offset": 256, 00:17:09.423 "data_size": 7936 00:17:09.423 } 00:17:09.423 ] 00:17:09.423 } 00:17:09.423 } 00:17:09.423 }' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:09.423 pt2' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.423 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.424 [2024-10-25 
17:58:27.813355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 3a816599-6f48-43e2-a82c-fa18fb5f68f1 '!=' 3a816599-6f48-43e2-a82c-fa18fb5f68f1 ']' 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.424 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.424 [2024-10-25 17:58:27.853058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.684 "name": "raid_bdev1", 00:17:09.684 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:09.684 "strip_size_kb": 0, 00:17:09.684 "state": "online", 00:17:09.684 "raid_level": "raid1", 00:17:09.684 "superblock": true, 00:17:09.684 "num_base_bdevs": 2, 00:17:09.684 "num_base_bdevs_discovered": 1, 00:17:09.684 "num_base_bdevs_operational": 1, 00:17:09.684 "base_bdevs_list": [ 00:17:09.684 { 00:17:09.684 "name": null, 00:17:09.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.684 "is_configured": false, 00:17:09.684 "data_offset": 0, 00:17:09.684 "data_size": 7936 00:17:09.684 }, 00:17:09.684 { 00:17:09.684 "name": "pt2", 00:17:09.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.684 "is_configured": true, 00:17:09.684 "data_offset": 256, 00:17:09.684 "data_size": 7936 00:17:09.684 } 00:17:09.684 ] 00:17:09.684 }' 00:17:09.684 17:58:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.684 17:58:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 [2024-10-25 17:58:28.276428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.945 [2024-10-25 17:58:28.276579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.945 [2024-10-25 17:58:28.276700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.945 [2024-10-25 17:58:28.276779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.945 [2024-10-25 17:58:28.276856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 [2024-10-25 17:58:28.352268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.945 [2024-10-25 17:58:28.352358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.945 [2024-10-25 17:58:28.352380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:09.945 [2024-10-25 17:58:28.352393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.945 [2024-10-25 17:58:28.354953] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.945 [2024-10-25 17:58:28.355001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.945 [2024-10-25 17:58:28.355087] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.945 [2024-10-25 17:58:28.355145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.945 [2024-10-25 17:58:28.355266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.945 [2024-10-25 17:58:28.355281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:09.945 [2024-10-25 17:58:28.355553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:09.945 [2024-10-25 17:58:28.355752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.945 [2024-10-25 17:58:28.355775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:09.945 [2024-10-25 17:58:28.355955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.945 pt2 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.945 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.205 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.205 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.205 "name": "raid_bdev1", 00:17:10.205 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:10.205 "strip_size_kb": 0, 00:17:10.205 "state": "online", 00:17:10.205 "raid_level": "raid1", 00:17:10.205 "superblock": true, 00:17:10.205 "num_base_bdevs": 2, 00:17:10.205 "num_base_bdevs_discovered": 1, 00:17:10.205 "num_base_bdevs_operational": 1, 00:17:10.205 "base_bdevs_list": [ 00:17:10.205 { 00:17:10.205 "name": null, 00:17:10.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.205 "is_configured": false, 00:17:10.205 "data_offset": 256, 00:17:10.205 "data_size": 7936 00:17:10.205 }, 00:17:10.205 { 00:17:10.205 "name": "pt2", 00:17:10.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.205 "is_configured": true, 00:17:10.205 "data_offset": 256, 00:17:10.205 "data_size": 7936 00:17:10.205 } 00:17:10.205 ] 00:17:10.205 }' 
00:17:10.205 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.205 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.465 [2024-10-25 17:58:28.839488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.465 [2024-10-25 17:58:28.839622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.465 [2024-10-25 17:58:28.839732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.465 [2024-10-25 17:58:28.839809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.465 [2024-10-25 17:58:28.839869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.465 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.725 [2024-10-25 17:58:28.903407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.725 [2024-10-25 17:58:28.903576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.725 [2024-10-25 17:58:28.903622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:10.725 [2024-10-25 17:58:28.903664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.725 [2024-10-25 17:58:28.906256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.725 [2024-10-25 17:58:28.906351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.725 [2024-10-25 17:58:28.906473] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:10.725 [2024-10-25 17:58:28.906550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.725 [2024-10-25 17:58:28.906727] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:10.725 [2024-10-25 17:58:28.906740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.725 [2024-10-25 17:58:28.906759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:10.725 [2024-10-25 17:58:28.906867] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.725 [2024-10-25 17:58:28.906973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:10.725 [2024-10-25 17:58:28.906982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:10.725 [2024-10-25 17:58:28.907275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:10.725 [2024-10-25 17:58:28.907431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:10.725 [2024-10-25 17:58:28.907445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:10.725 [2024-10-25 17:58:28.907658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.725 pt1 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.725 "name": "raid_bdev1", 00:17:10.725 "uuid": "3a816599-6f48-43e2-a82c-fa18fb5f68f1", 00:17:10.725 "strip_size_kb": 0, 00:17:10.725 "state": "online", 00:17:10.725 "raid_level": "raid1", 00:17:10.725 "superblock": true, 00:17:10.725 "num_base_bdevs": 2, 00:17:10.725 "num_base_bdevs_discovered": 1, 00:17:10.725 "num_base_bdevs_operational": 1, 00:17:10.725 "base_bdevs_list": [ 00:17:10.725 { 00:17:10.725 "name": null, 00:17:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.725 "is_configured": false, 00:17:10.725 "data_offset": 256, 00:17:10.725 "data_size": 7936 00:17:10.725 }, 00:17:10.725 { 00:17:10.725 "name": "pt2", 00:17:10.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.725 "is_configured": true, 00:17:10.725 "data_offset": 256, 00:17:10.725 "data_size": 7936 00:17:10.725 } 00:17:10.725 ] 00:17:10.725 }' 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.725 17:58:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:10.985 [2024-10-25 17:58:29.411060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.985 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 3a816599-6f48-43e2-a82c-fa18fb5f68f1 '!=' 3a816599-6f48-43e2-a82c-fa18fb5f68f1 ']' 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86076 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86076 ']' 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86076 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86076 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.248 killing process with pid 86076 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86076' 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86076 00:17:11.248 [2024-10-25 17:58:29.492752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.248 [2024-10-25 17:58:29.492883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.248 [2024-10-25 17:58:29.492938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.248 [2024-10-25 17:58:29.492954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:11.248 17:58:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86076 00:17:11.519 [2024-10-25 17:58:29.746460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.903 17:58:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:12.903 00:17:12.903 real 0m6.516s 00:17:12.903 user 0m9.656s 00:17:12.903 sys 0m1.195s 00:17:12.903 17:58:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.903 17:58:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.903 ************************************ 00:17:12.903 END TEST raid_superblock_test_4k 00:17:12.903 ************************************ 00:17:12.903 17:58:31 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:12.903 17:58:31 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:12.903 17:58:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:12.903 17:58:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.903 17:58:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.903 ************************************ 00:17:12.903 START TEST raid_rebuild_test_sb_4k 00:17:12.903 ************************************ 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86404 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86404 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86404 ']' 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.903 17:58:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.903 [2024-10-25 17:58:31.283658] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:12.903 [2024-10-25 17:58:31.283901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86404 ] 00:17:12.903 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:12.903 Zero copy mechanism will not be used. 
00:17:13.163 [2024-10-25 17:58:31.462182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.163 [2024-10-25 17:58:31.598266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.422 [2024-10-25 17:58:31.839539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.422 [2024-10-25 17:58:31.839682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 BaseBdev1_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 [2024-10-25 17:58:32.194099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.992 [2024-10-25 17:58:32.194190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.992 [2024-10-25 17:58:32.194214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:17:13.992 [2024-10-25 17:58:32.194229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.992 [2024-10-25 17:58:32.196732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.992 [2024-10-25 17:58:32.196780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.992 BaseBdev1 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 BaseBdev2_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 [2024-10-25 17:58:32.255638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:13.992 [2024-10-25 17:58:32.255720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.992 [2024-10-25 17:58:32.255742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.992 [2024-10-25 17:58:32.255757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.992 [2024-10-25 17:58:32.258177] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.992 [2024-10-25 17:58:32.258318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:13.992 BaseBdev2 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 spare_malloc 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 spare_delay 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 [2024-10-25 17:58:32.341188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.992 [2024-10-25 17:58:32.341278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.992 [2024-10-25 17:58:32.341305] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:13.992 [2024-10-25 17:58:32.341319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.992 [2024-10-25 17:58:32.343922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.992 [2024-10-25 17:58:32.343974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.992 spare 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.992 [2024-10-25 17:58:32.353221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.992 [2024-10-25 17:58:32.355468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.992 [2024-10-25 17:58:32.355686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.992 [2024-10-25 17:58:32.355707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.992 [2024-10-25 17:58:32.356028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:13.992 [2024-10-25 17:58:32.356227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.992 [2024-10-25 17:58:32.356248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.992 [2024-10-25 17:58:32.356446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.992 17:58:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.992 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.993 "name": "raid_bdev1", 00:17:13.993 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:13.993 
"strip_size_kb": 0, 00:17:13.993 "state": "online", 00:17:13.993 "raid_level": "raid1", 00:17:13.993 "superblock": true, 00:17:13.993 "num_base_bdevs": 2, 00:17:13.993 "num_base_bdevs_discovered": 2, 00:17:13.993 "num_base_bdevs_operational": 2, 00:17:13.993 "base_bdevs_list": [ 00:17:13.993 { 00:17:13.993 "name": "BaseBdev1", 00:17:13.993 "uuid": "22fb3ee8-0ae2-5c0a-a698-f29dcb434ea7", 00:17:13.993 "is_configured": true, 00:17:13.993 "data_offset": 256, 00:17:13.993 "data_size": 7936 00:17:13.993 }, 00:17:13.993 { 00:17:13.993 "name": "BaseBdev2", 00:17:13.993 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:13.993 "is_configured": true, 00:17:13.993 "data_offset": 256, 00:17:13.993 "data_size": 7936 00:17:13.993 } 00:17:13.993 ] 00:17:13.993 }' 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.993 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 [2024-10-25 17:58:32.857029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.563 
17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.563 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:14.564 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.564 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.564 17:58:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:14.824 [2024-10-25 17:58:33.148214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005fb0 00:17:14.824 /dev/nbd0 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.824 1+0 records in 00:17:14.824 1+0 records out 00:17:14.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614217 s, 6.7 MB/s 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:14.824 17:58:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:14.824 17:58:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:15.764 7936+0 records in 00:17:15.764 7936+0 records out 00:17:15.764 32505856 bytes (33 MB, 31 MiB) copied, 0.808916 s, 40.2 MB/s 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.764 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.024 [2024-10-25 17:58:34.265867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.024 [2024-10-25 17:58:34.281946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.024 "name": "raid_bdev1", 00:17:16.024 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:16.024 "strip_size_kb": 0, 00:17:16.024 "state": "online", 00:17:16.024 "raid_level": "raid1", 00:17:16.024 "superblock": true, 00:17:16.024 "num_base_bdevs": 2, 00:17:16.024 "num_base_bdevs_discovered": 1, 00:17:16.024 "num_base_bdevs_operational": 1, 00:17:16.024 "base_bdevs_list": [ 00:17:16.024 { 00:17:16.024 "name": null, 00:17:16.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.024 "is_configured": false, 00:17:16.024 "data_offset": 0, 00:17:16.024 "data_size": 7936 00:17:16.024 }, 00:17:16.024 { 00:17:16.024 "name": "BaseBdev2", 00:17:16.024 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:16.024 "is_configured": true, 00:17:16.024 "data_offset": 256, 00:17:16.024 "data_size": 7936 00:17:16.024 } 00:17:16.024 ] 00:17:16.024 }' 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.024 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 17:58:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.598 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.598 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.598 [2024-10-25 17:58:34.769157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.598 [2024-10-25 17:58:34.789190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:16.598 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.598 17:58:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:16.598 [2024-10-25 17:58:34.791521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.537 17:58:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.537 "name": "raid_bdev1", 00:17:17.537 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:17.537 "strip_size_kb": 0, 00:17:17.537 "state": "online", 00:17:17.537 "raid_level": "raid1", 00:17:17.537 "superblock": true, 00:17:17.537 "num_base_bdevs": 2, 00:17:17.537 "num_base_bdevs_discovered": 2, 00:17:17.537 "num_base_bdevs_operational": 2, 00:17:17.537 "process": { 00:17:17.537 "type": "rebuild", 00:17:17.537 "target": "spare", 00:17:17.537 "progress": { 00:17:17.537 "blocks": 2560, 00:17:17.537 "percent": 32 00:17:17.537 } 00:17:17.537 }, 00:17:17.537 "base_bdevs_list": [ 00:17:17.537 { 00:17:17.537 "name": "spare", 00:17:17.537 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:17.537 "is_configured": true, 00:17:17.537 "data_offset": 256, 00:17:17.537 "data_size": 7936 00:17:17.537 }, 00:17:17.537 { 00:17:17.537 "name": "BaseBdev2", 00:17:17.537 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:17.537 "is_configured": true, 00:17:17.537 "data_offset": 256, 00:17:17.537 "data_size": 7936 00:17:17.537 } 00:17:17.537 ] 00:17:17.537 }' 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.537 17:58:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.537 17:58:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.537 [2024-10-25 17:58:35.962489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.797 [2024-10-25 17:58:35.997533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.797 [2024-10-25 17:58:35.997630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.797 [2024-10-25 17:58:35.997648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.797 [2024-10-25 17:58:35.997660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.797 17:58:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.797 "name": "raid_bdev1", 00:17:17.797 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:17.797 "strip_size_kb": 0, 00:17:17.797 "state": "online", 00:17:17.797 "raid_level": "raid1", 00:17:17.797 "superblock": true, 00:17:17.797 "num_base_bdevs": 2, 00:17:17.797 "num_base_bdevs_discovered": 1, 00:17:17.797 "num_base_bdevs_operational": 1, 00:17:17.797 "base_bdevs_list": [ 00:17:17.797 { 00:17:17.797 "name": null, 00:17:17.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.797 "is_configured": false, 00:17:17.797 "data_offset": 0, 00:17:17.797 "data_size": 7936 00:17:17.797 }, 00:17:17.797 { 00:17:17.797 "name": "BaseBdev2", 00:17:17.797 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 256, 00:17:17.797 "data_size": 7936 00:17:17.797 } 00:17:17.797 ] 00:17:17.797 }' 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.797 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.092 17:58:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.092 "name": "raid_bdev1", 00:17:18.092 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:18.092 "strip_size_kb": 0, 00:17:18.092 "state": "online", 00:17:18.092 "raid_level": "raid1", 00:17:18.092 "superblock": true, 00:17:18.092 "num_base_bdevs": 2, 00:17:18.092 "num_base_bdevs_discovered": 1, 00:17:18.092 "num_base_bdevs_operational": 1, 00:17:18.092 "base_bdevs_list": [ 00:17:18.092 { 00:17:18.092 "name": null, 00:17:18.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.092 "is_configured": false, 00:17:18.092 "data_offset": 0, 00:17:18.092 "data_size": 7936 00:17:18.092 }, 00:17:18.092 { 00:17:18.092 "name": "BaseBdev2", 00:17:18.092 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:18.092 "is_configured": true, 00:17:18.092 "data_offset": 256, 00:17:18.092 "data_size": 7936 00:17:18.092 } 00:17:18.092 ] 00:17:18.092 }' 00:17:18.092 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.351 17:58:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.351 [2024-10-25 17:58:36.594628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.351 [2024-10-25 17:58:36.613711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.351 17:58:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.351 [2024-10-25 17:58:36.615998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.288 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.288 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.288 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.288 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.288 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.289 "name": "raid_bdev1", 00:17:19.289 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:19.289 "strip_size_kb": 0, 00:17:19.289 "state": "online", 00:17:19.289 "raid_level": "raid1", 00:17:19.289 "superblock": true, 00:17:19.289 "num_base_bdevs": 2, 00:17:19.289 "num_base_bdevs_discovered": 2, 00:17:19.289 "num_base_bdevs_operational": 2, 00:17:19.289 "process": { 00:17:19.289 "type": "rebuild", 00:17:19.289 "target": "spare", 00:17:19.289 "progress": { 00:17:19.289 "blocks": 2560, 00:17:19.289 "percent": 32 00:17:19.289 } 00:17:19.289 }, 00:17:19.289 "base_bdevs_list": [ 00:17:19.289 { 00:17:19.289 "name": "spare", 00:17:19.289 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:19.289 "is_configured": true, 00:17:19.289 "data_offset": 256, 00:17:19.289 "data_size": 7936 00:17:19.289 }, 00:17:19.289 { 00:17:19.289 "name": "BaseBdev2", 00:17:19.289 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:19.289 "is_configured": true, 00:17:19.289 "data_offset": 256, 00:17:19.289 "data_size": 7936 00:17:19.289 } 00:17:19.289 ] 00:17:19.289 }' 00:17:19.289 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:19.548 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.548 17:58:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.548 "name": "raid_bdev1", 00:17:19.548 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:19.548 "strip_size_kb": 0, 00:17:19.548 "state": "online", 00:17:19.548 "raid_level": "raid1", 00:17:19.548 "superblock": true, 00:17:19.548 "num_base_bdevs": 2, 00:17:19.548 "num_base_bdevs_discovered": 2, 00:17:19.548 "num_base_bdevs_operational": 2, 00:17:19.548 "process": { 00:17:19.548 "type": "rebuild", 00:17:19.548 "target": "spare", 00:17:19.548 "progress": { 00:17:19.548 "blocks": 2816, 00:17:19.548 "percent": 35 00:17:19.548 } 00:17:19.548 }, 00:17:19.548 "base_bdevs_list": [ 00:17:19.548 { 00:17:19.548 "name": "spare", 00:17:19.548 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:19.548 "is_configured": true, 00:17:19.548 "data_offset": 256, 00:17:19.548 "data_size": 7936 00:17:19.548 }, 00:17:19.548 { 00:17:19.548 "name": "BaseBdev2", 00:17:19.548 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:19.548 "is_configured": true, 00:17:19.548 "data_offset": 256, 00:17:19.548 "data_size": 7936 00:17:19.548 } 00:17:19.548 ] 00:17:19.548 }' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.548 17:58:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.927 17:58:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.927 "name": "raid_bdev1", 00:17:20.927 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:20.927 "strip_size_kb": 0, 00:17:20.927 "state": "online", 00:17:20.927 "raid_level": "raid1", 00:17:20.927 "superblock": true, 00:17:20.927 "num_base_bdevs": 2, 00:17:20.927 "num_base_bdevs_discovered": 2, 00:17:20.927 "num_base_bdevs_operational": 2, 00:17:20.927 "process": { 00:17:20.927 "type": "rebuild", 00:17:20.927 "target": "spare", 00:17:20.927 "progress": { 00:17:20.927 "blocks": 5888, 00:17:20.927 "percent": 74 00:17:20.927 } 00:17:20.927 }, 00:17:20.927 "base_bdevs_list": [ 00:17:20.927 { 00:17:20.927 "name": "spare", 00:17:20.927 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:20.927 "is_configured": true, 00:17:20.927 "data_offset": 256, 00:17:20.927 "data_size": 7936 00:17:20.927 
}, 00:17:20.927 { 00:17:20.927 "name": "BaseBdev2", 00:17:20.927 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:20.927 "is_configured": true, 00:17:20.927 "data_offset": 256, 00:17:20.927 "data_size": 7936 00:17:20.927 } 00:17:20.927 ] 00:17:20.927 }' 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.927 17:58:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.495 [2024-10-25 17:58:39.730381] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.495 [2024-10-25 17:58:39.730470] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.495 [2024-10-25 17:58:39.730593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.753 "name": "raid_bdev1", 00:17:21.753 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:21.753 "strip_size_kb": 0, 00:17:21.753 "state": "online", 00:17:21.753 "raid_level": "raid1", 00:17:21.753 "superblock": true, 00:17:21.753 "num_base_bdevs": 2, 00:17:21.753 "num_base_bdevs_discovered": 2, 00:17:21.753 "num_base_bdevs_operational": 2, 00:17:21.753 "base_bdevs_list": [ 00:17:21.753 { 00:17:21.753 "name": "spare", 00:17:21.753 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:21.753 "is_configured": true, 00:17:21.753 "data_offset": 256, 00:17:21.753 "data_size": 7936 00:17:21.753 }, 00:17:21.753 { 00:17:21.753 "name": "BaseBdev2", 00:17:21.753 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:21.753 "is_configured": true, 00:17:21.753 "data_offset": 256, 00:17:21.753 "data_size": 7936 00:17:21.753 } 00:17:21.753 ] 00:17:21.753 }' 00:17:21.753 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.011 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.012 "name": "raid_bdev1", 00:17:22.012 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:22.012 "strip_size_kb": 0, 00:17:22.012 "state": "online", 00:17:22.012 "raid_level": "raid1", 00:17:22.012 "superblock": true, 00:17:22.012 "num_base_bdevs": 2, 00:17:22.012 "num_base_bdevs_discovered": 2, 00:17:22.012 "num_base_bdevs_operational": 2, 00:17:22.012 "base_bdevs_list": [ 00:17:22.012 { 00:17:22.012 "name": "spare", 00:17:22.012 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:22.012 "is_configured": true, 00:17:22.012 "data_offset": 256, 00:17:22.012 "data_size": 7936 00:17:22.012 }, 00:17:22.012 { 00:17:22.012 "name": "BaseBdev2", 00:17:22.012 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:22.012 "is_configured": true, 
00:17:22.012 "data_offset": 256, 00:17:22.012 "data_size": 7936 00:17:22.012 } 00:17:22.012 ] 00:17:22.012 }' 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.012 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.270 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.270 "name": "raid_bdev1", 00:17:22.271 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:22.271 "strip_size_kb": 0, 00:17:22.271 "state": "online", 00:17:22.271 "raid_level": "raid1", 00:17:22.271 "superblock": true, 00:17:22.271 "num_base_bdevs": 2, 00:17:22.271 "num_base_bdevs_discovered": 2, 00:17:22.271 "num_base_bdevs_operational": 2, 00:17:22.271 "base_bdevs_list": [ 00:17:22.271 { 00:17:22.271 "name": "spare", 00:17:22.271 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:22.271 "is_configured": true, 00:17:22.271 "data_offset": 256, 00:17:22.271 "data_size": 7936 00:17:22.271 }, 00:17:22.271 { 00:17:22.271 "name": "BaseBdev2", 00:17:22.271 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:22.271 "is_configured": true, 00:17:22.271 "data_offset": 256, 00:17:22.271 "data_size": 7936 00:17:22.271 } 00:17:22.271 ] 00:17:22.271 }' 00:17:22.271 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.271 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 [2024-10-25 17:58:40.853213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.632 [2024-10-25 17:58:40.853320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:22.632 [2024-10-25 17:58:40.853438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.632 [2024-10-25 17:58:40.853550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.632 [2024-10-25 17:58:40.853607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.632 17:58:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:22.891 /dev/nbd0 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.891 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.892 1+0 records in 00:17:22.892 1+0 records out 00:17:22.892 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000568031 s, 7.2 MB/s 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.892 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.151 /dev/nbd1 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.151 1+0 records in 00:17:23.151 1+0 records out 00:17:23.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371825 s, 11.0 MB/s 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.151 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.410 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.670 17:58:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:23.930 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.931 [2024-10-25 17:58:42.243474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.931 [2024-10-25 17:58:42.243550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.931 [2024-10-25 17:58:42.243578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:23.931 [2024-10-25 17:58:42.243591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.931 [2024-10-25 17:58:42.246231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.931 [2024-10-25 17:58:42.246276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.931 [2024-10-25 17:58:42.246376] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.931 [2024-10-25 
17:58:42.246435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.931 [2024-10-25 17:58:42.246616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.931 spare 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.931 [2024-10-25 17:58:42.346534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:23.931 [2024-10-25 17:58:42.346578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.931 [2024-10-25 17:58:42.346947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:23.931 [2024-10-25 17:58:42.347183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:23.931 [2024-10-25 17:58:42.347205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:23.931 [2024-10-25 17:58:42.347432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.931 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.191 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.191 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.191 "name": "raid_bdev1", 00:17:24.191 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:24.191 "strip_size_kb": 0, 00:17:24.191 "state": "online", 00:17:24.191 "raid_level": "raid1", 00:17:24.191 "superblock": true, 00:17:24.191 "num_base_bdevs": 2, 00:17:24.191 "num_base_bdevs_discovered": 2, 00:17:24.191 "num_base_bdevs_operational": 2, 00:17:24.191 "base_bdevs_list": [ 00:17:24.191 { 00:17:24.191 "name": "spare", 00:17:24.191 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:24.191 "is_configured": true, 00:17:24.191 "data_offset": 256, 00:17:24.191 "data_size": 7936 00:17:24.191 }, 00:17:24.191 { 
00:17:24.191 "name": "BaseBdev2", 00:17:24.191 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:24.191 "is_configured": true, 00:17:24.191 "data_offset": 256, 00:17:24.191 "data_size": 7936 00:17:24.191 } 00:17:24.191 ] 00:17:24.191 }' 00:17:24.191 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.191 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.450 "name": "raid_bdev1", 00:17:24.450 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:24.450 "strip_size_kb": 0, 00:17:24.450 "state": "online", 00:17:24.450 "raid_level": "raid1", 00:17:24.450 "superblock": true, 00:17:24.450 "num_base_bdevs": 2, 00:17:24.450 "num_base_bdevs_discovered": 2, 
00:17:24.450 "num_base_bdevs_operational": 2, 00:17:24.450 "base_bdevs_list": [ 00:17:24.450 { 00:17:24.450 "name": "spare", 00:17:24.450 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:24.450 "is_configured": true, 00:17:24.450 "data_offset": 256, 00:17:24.450 "data_size": 7936 00:17:24.450 }, 00:17:24.450 { 00:17:24.450 "name": "BaseBdev2", 00:17:24.450 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:24.450 "is_configured": true, 00:17:24.450 "data_offset": 256, 00:17:24.450 "data_size": 7936 00:17:24.450 } 00:17:24.450 ] 00:17:24.450 }' 00:17:24.450 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.709 17:58:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.709 [2024-10-25 17:58:42.998361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.709 17:58:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.709 "name": "raid_bdev1", 00:17:24.709 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:24.709 "strip_size_kb": 0, 00:17:24.709 "state": "online", 00:17:24.709 "raid_level": "raid1", 00:17:24.709 "superblock": true, 00:17:24.709 "num_base_bdevs": 2, 00:17:24.709 "num_base_bdevs_discovered": 1, 00:17:24.709 "num_base_bdevs_operational": 1, 00:17:24.709 "base_bdevs_list": [ 00:17:24.709 { 00:17:24.709 "name": null, 00:17:24.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.709 "is_configured": false, 00:17:24.709 "data_offset": 0, 00:17:24.709 "data_size": 7936 00:17:24.709 }, 00:17:24.709 { 00:17:24.709 "name": "BaseBdev2", 00:17:24.709 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:24.709 "is_configured": true, 00:17:24.709 "data_offset": 256, 00:17:24.709 "data_size": 7936 00:17:24.709 } 00:17:24.709 ] 00:17:24.709 }' 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.709 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.277 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.277 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.277 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.277 [2024-10-25 17:58:43.453661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.277 [2024-10-25 17:58:43.453921] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.277 [2024-10-25 17:58:43.453946] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:25.277 [2024-10-25 17:58:43.453990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.277 [2024-10-25 17:58:43.473793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:25.277 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.277 17:58:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:25.277 [2024-10-25 17:58:43.476043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.215 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.215 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.215 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.216 "name": "raid_bdev1", 00:17:26.216 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:26.216 "strip_size_kb": 0, 00:17:26.216 "state": "online", 
00:17:26.216 "raid_level": "raid1", 00:17:26.216 "superblock": true, 00:17:26.216 "num_base_bdevs": 2, 00:17:26.216 "num_base_bdevs_discovered": 2, 00:17:26.216 "num_base_bdevs_operational": 2, 00:17:26.216 "process": { 00:17:26.216 "type": "rebuild", 00:17:26.216 "target": "spare", 00:17:26.216 "progress": { 00:17:26.216 "blocks": 2560, 00:17:26.216 "percent": 32 00:17:26.216 } 00:17:26.216 }, 00:17:26.216 "base_bdevs_list": [ 00:17:26.216 { 00:17:26.216 "name": "spare", 00:17:26.216 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:26.216 "is_configured": true, 00:17:26.216 "data_offset": 256, 00:17:26.216 "data_size": 7936 00:17:26.216 }, 00:17:26.216 { 00:17:26.216 "name": "BaseBdev2", 00:17:26.216 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:26.216 "is_configured": true, 00:17:26.216 "data_offset": 256, 00:17:26.216 "data_size": 7936 00:17:26.216 } 00:17:26.216 ] 00:17:26.216 }' 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.216 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.216 [2024-10-25 17:58:44.639227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.476 [2024-10-25 17:58:44.682111] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.476 [2024-10-25 
17:58:44.682185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.476 [2024-10-25 17:58:44.682202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.477 [2024-10-25 17:58:44.682213] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.477 "name": "raid_bdev1", 00:17:26.477 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:26.477 "strip_size_kb": 0, 00:17:26.477 "state": "online", 00:17:26.477 "raid_level": "raid1", 00:17:26.477 "superblock": true, 00:17:26.477 "num_base_bdevs": 2, 00:17:26.477 "num_base_bdevs_discovered": 1, 00:17:26.477 "num_base_bdevs_operational": 1, 00:17:26.477 "base_bdevs_list": [ 00:17:26.477 { 00:17:26.477 "name": null, 00:17:26.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.477 "is_configured": false, 00:17:26.477 "data_offset": 0, 00:17:26.477 "data_size": 7936 00:17:26.477 }, 00:17:26.477 { 00:17:26.477 "name": "BaseBdev2", 00:17:26.477 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:26.477 "is_configured": true, 00:17:26.477 "data_offset": 256, 00:17:26.477 "data_size": 7936 00:17:26.477 } 00:17:26.477 ] 00:17:26.477 }' 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.477 17:58:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.047 17:58:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.047 17:58:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.047 17:58:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.047 [2024-10-25 17:58:45.197795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.047 [2024-10-25 17:58:45.197907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.047 [2024-10-25 17:58:45.197932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:27.047 [2024-10-25 17:58:45.197945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.047 [2024-10-25 17:58:45.198504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.047 [2024-10-25 17:58:45.198542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.047 [2024-10-25 17:58:45.198649] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:27.047 [2024-10-25 17:58:45.198682] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.047 [2024-10-25 17:58:45.198695] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:27.047 [2024-10-25 17:58:45.198728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.047 [2024-10-25 17:58:45.218154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:27.047 spare 00:17:27.047 17:58:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.047 17:58:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:27.047 [2024-10-25 17:58:45.220401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.989 "name": "raid_bdev1", 00:17:27.989 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:27.989 "strip_size_kb": 0, 00:17:27.989 "state": "online", 00:17:27.989 "raid_level": "raid1", 00:17:27.989 "superblock": true, 00:17:27.989 "num_base_bdevs": 2, 00:17:27.989 "num_base_bdevs_discovered": 2, 00:17:27.989 "num_base_bdevs_operational": 2, 00:17:27.989 "process": { 00:17:27.989 "type": "rebuild", 00:17:27.989 "target": "spare", 00:17:27.989 "progress": { 00:17:27.989 "blocks": 2560, 00:17:27.989 "percent": 32 00:17:27.989 } 00:17:27.989 }, 00:17:27.989 "base_bdevs_list": [ 00:17:27.989 { 00:17:27.989 "name": "spare", 00:17:27.989 "uuid": "c8f23069-cca5-50fb-8701-47c8f34c04df", 00:17:27.989 "is_configured": true, 00:17:27.989 "data_offset": 256, 00:17:27.989 "data_size": 7936 00:17:27.989 }, 00:17:27.989 { 00:17:27.989 "name": "BaseBdev2", 00:17:27.989 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:27.989 "is_configured": true, 00:17:27.989 "data_offset": 256, 00:17:27.989 "data_size": 7936 00:17:27.989 } 00:17:27.989 ] 00:17:27.989 }' 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.989 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.989 [2024-10-25 17:58:46.343304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.249 [2024-10-25 17:58:46.426248] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.249 [2024-10-25 17:58:46.426344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.249 [2024-10-25 17:58:46.426366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.249 [2024-10-25 17:58:46.426375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.249 "name": "raid_bdev1", 00:17:28.249 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:28.249 "strip_size_kb": 0, 00:17:28.249 "state": "online", 00:17:28.249 "raid_level": "raid1", 00:17:28.249 "superblock": true, 00:17:28.249 "num_base_bdevs": 2, 00:17:28.249 "num_base_bdevs_discovered": 1, 00:17:28.249 "num_base_bdevs_operational": 1, 00:17:28.249 "base_bdevs_list": [ 00:17:28.249 { 00:17:28.249 "name": null, 00:17:28.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.249 "is_configured": false, 00:17:28.249 "data_offset": 0, 00:17:28.249 "data_size": 7936 00:17:28.249 }, 00:17:28.249 { 00:17:28.249 "name": "BaseBdev2", 00:17:28.249 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:28.249 "is_configured": true, 00:17:28.249 "data_offset": 256, 00:17:28.249 "data_size": 7936 00:17:28.249 } 00:17:28.249 ] 00:17:28.249 }' 
00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.249 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.507 "name": "raid_bdev1", 00:17:28.507 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:28.507 "strip_size_kb": 0, 00:17:28.507 "state": "online", 00:17:28.507 "raid_level": "raid1", 00:17:28.507 "superblock": true, 00:17:28.507 "num_base_bdevs": 2, 00:17:28.507 "num_base_bdevs_discovered": 1, 00:17:28.507 "num_base_bdevs_operational": 1, 00:17:28.507 "base_bdevs_list": [ 00:17:28.507 { 00:17:28.507 "name": null, 00:17:28.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.507 "is_configured": false, 00:17:28.507 "data_offset": 0, 
00:17:28.507 "data_size": 7936 00:17:28.507 }, 00:17:28.507 { 00:17:28.507 "name": "BaseBdev2", 00:17:28.507 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:28.507 "is_configured": true, 00:17:28.507 "data_offset": 256, 00:17:28.507 "data_size": 7936 00:17:28.507 } 00:17:28.507 ] 00:17:28.507 }' 00:17:28.507 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.766 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.766 17:58:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 [2024-10-25 17:58:47.057712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.766 [2024-10-25 17:58:47.057803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.766 [2024-10-25 17:58:47.057845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:28.766 [2024-10-25 17:58:47.057870] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.766 [2024-10-25 17:58:47.058409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.766 [2024-10-25 17:58:47.058441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.766 [2024-10-25 17:58:47.058542] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:28.766 [2024-10-25 17:58:47.058571] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:28.766 [2024-10-25 17:58:47.058584] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:28.766 [2024-10-25 17:58:47.058598] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:28.766 BaseBdev1 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.766 17:58:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.704 "name": "raid_bdev1", 00:17:29.704 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:29.704 "strip_size_kb": 0, 00:17:29.704 "state": "online", 00:17:29.704 "raid_level": "raid1", 00:17:29.704 "superblock": true, 00:17:29.704 "num_base_bdevs": 2, 00:17:29.704 "num_base_bdevs_discovered": 1, 00:17:29.704 "num_base_bdevs_operational": 1, 00:17:29.704 "base_bdevs_list": [ 00:17:29.704 { 00:17:29.704 "name": null, 00:17:29.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.704 "is_configured": false, 00:17:29.704 "data_offset": 0, 00:17:29.704 "data_size": 7936 00:17:29.704 }, 00:17:29.704 { 00:17:29.704 "name": "BaseBdev2", 00:17:29.704 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:29.704 "is_configured": true, 00:17:29.704 "data_offset": 256, 00:17:29.704 "data_size": 7936 00:17:29.704 } 00:17:29.704 ] 00:17:29.704 }' 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.704 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.273 "name": "raid_bdev1", 00:17:30.273 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:30.273 "strip_size_kb": 0, 00:17:30.273 "state": "online", 00:17:30.273 "raid_level": "raid1", 00:17:30.273 "superblock": true, 00:17:30.273 "num_base_bdevs": 2, 00:17:30.273 "num_base_bdevs_discovered": 1, 00:17:30.273 "num_base_bdevs_operational": 1, 00:17:30.273 "base_bdevs_list": [ 00:17:30.273 { 00:17:30.273 "name": null, 00:17:30.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.273 "is_configured": false, 00:17:30.273 "data_offset": 0, 00:17:30.273 "data_size": 7936 00:17:30.273 }, 00:17:30.273 { 00:17:30.273 "name": "BaseBdev2", 00:17:30.273 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:30.273 "is_configured": true, 
00:17:30.273 "data_offset": 256, 00:17:30.273 "data_size": 7936 00:17:30.273 } 00:17:30.273 ] 00:17:30.273 }' 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.273 [2024-10-25 17:58:48.639111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.273 [2024-10-25 17:58:48.639310] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.273 [2024-10-25 17:58:48.639335] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.273 request: 00:17:30.273 { 00:17:30.273 "base_bdev": "BaseBdev1", 00:17:30.273 "raid_bdev": "raid_bdev1", 00:17:30.273 "method": "bdev_raid_add_base_bdev", 00:17:30.273 "req_id": 1 00:17:30.273 } 00:17:30.273 Got JSON-RPC error response 00:17:30.273 response: 00:17:30.273 { 00:17:30.273 "code": -22, 00:17:30.273 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:30.273 } 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.273 17:58:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.652 "name": "raid_bdev1", 00:17:31.652 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:31.652 "strip_size_kb": 0, 00:17:31.652 "state": "online", 00:17:31.652 "raid_level": "raid1", 00:17:31.652 "superblock": true, 00:17:31.652 "num_base_bdevs": 2, 00:17:31.652 "num_base_bdevs_discovered": 1, 00:17:31.652 "num_base_bdevs_operational": 1, 00:17:31.652 "base_bdevs_list": [ 00:17:31.652 { 00:17:31.652 "name": null, 00:17:31.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.652 "is_configured": false, 00:17:31.652 "data_offset": 0, 00:17:31.652 "data_size": 7936 00:17:31.652 }, 00:17:31.652 { 00:17:31.652 "name": "BaseBdev2", 00:17:31.652 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:31.652 "is_configured": true, 00:17:31.652 "data_offset": 256, 00:17:31.652 "data_size": 7936 00:17:31.652 } 00:17:31.652 ] 00:17:31.652 }' 
00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.652 17:58:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.912 "name": "raid_bdev1", 00:17:31.912 "uuid": "e0367cd4-e58b-4097-a1d4-51c5a3a64118", 00:17:31.912 "strip_size_kb": 0, 00:17:31.912 "state": "online", 00:17:31.912 "raid_level": "raid1", 00:17:31.912 "superblock": true, 00:17:31.912 "num_base_bdevs": 2, 00:17:31.912 "num_base_bdevs_discovered": 1, 00:17:31.912 "num_base_bdevs_operational": 1, 00:17:31.912 "base_bdevs_list": [ 00:17:31.912 { 00:17:31.912 "name": null, 00:17:31.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.912 "is_configured": false, 00:17:31.912 "data_offset": 0, 
00:17:31.912 "data_size": 7936 00:17:31.912 }, 00:17:31.912 { 00:17:31.912 "name": "BaseBdev2", 00:17:31.912 "uuid": "9df49ba2-58e6-5501-ba56-6fe53afe74cc", 00:17:31.912 "is_configured": true, 00:17:31.912 "data_offset": 256, 00:17:31.912 "data_size": 7936 00:17:31.912 } 00:17:31.912 ] 00:17:31.912 }' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86404 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86404 ']' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86404 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.912 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86404 00:17:31.913 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.913 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.913 killing process with pid 86404 00:17:31.913 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86404' 00:17:31.913 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86404 00:17:31.913 Received shutdown signal, test time was about 
60.000000 seconds 00:17:31.913 00:17:31.913 Latency(us) 00:17:31.913 [2024-10-25T17:58:50.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.913 [2024-10-25T17:58:50.349Z] =================================================================================================================== 00:17:31.913 [2024-10-25T17:58:50.349Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.913 [2024-10-25 17:58:50.302418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.913 17:58:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86404 00:17:31.913 [2024-10-25 17:58:50.302577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.913 [2024-10-25 17:58:50.302639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.913 [2024-10-25 17:58:50.302654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:32.485 [2024-10-25 17:58:50.674857] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.868 17:58:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:33.868 00:17:33.868 real 0m20.822s 00:17:33.868 user 0m26.885s 00:17:33.868 sys 0m3.012s 00:17:33.868 17:58:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.869 17:58:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.869 ************************************ 00:17:33.869 END TEST raid_rebuild_test_sb_4k 00:17:33.869 ************************************ 00:17:33.869 17:58:52 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:33.869 17:58:52 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:33.869 17:58:52 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:33.869 17:58:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.869 17:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.869 ************************************ 00:17:33.869 START TEST raid_state_function_test_sb_md_separate 00:17:33.869 ************************************ 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.869 17:58:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87107 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:33.869 Process raid pid: 87107 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87107' 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87107 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87107 ']' 00:17:33.869 17:58:52 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.869 17:58:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.869 [2024-10-25 17:58:52.186482] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:33.869 [2024-10-25 17:58:52.186636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.129 [2024-10-25 17:58:52.371043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.129 [2024-10-25 17:58:52.509002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.389 [2024-10-25 17:58:52.759864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.389 [2024-10-25 17:58:52.759913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.649 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.649 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:34.649 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:34.649 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 [2024-10-25 17:58:53.084041] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.649 [2024-10-25 17:58:53.084107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.649 [2024-10-25 17:58:53.084119] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.649 [2024-10-25 17:58:53.084131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.910 "name": "Existed_Raid", 00:17:34.910 "uuid": "e5442dcf-3bec-4caf-b85c-aab028ff3eb1", 00:17:34.910 "strip_size_kb": 0, 00:17:34.910 "state": "configuring", 00:17:34.910 "raid_level": "raid1", 00:17:34.910 "superblock": true, 00:17:34.910 "num_base_bdevs": 2, 00:17:34.910 "num_base_bdevs_discovered": 0, 00:17:34.910 "num_base_bdevs_operational": 2, 00:17:34.910 "base_bdevs_list": [ 00:17:34.910 { 00:17:34.910 "name": "BaseBdev1", 00:17:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.910 "is_configured": false, 00:17:34.910 "data_offset": 0, 00:17:34.910 "data_size": 0 00:17:34.910 }, 00:17:34.910 { 00:17:34.910 "name": "BaseBdev2", 00:17:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.910 "is_configured": false, 00:17:34.910 "data_offset": 0, 00:17:34.910 "data_size": 0 00:17:34.910 } 00:17:34.910 ] 00:17:34.910 }' 00:17:34.910 17:58:53 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.910 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 [2024-10-25 17:58:53.555252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.170 [2024-10-25 17:58:53.555305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 [2024-10-25 17:58:53.563213] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.170 [2024-10-25 17:58:53.563259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.170 [2024-10-25 17:58:53.563271] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.170 [2024-10-25 17:58:53.563284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.170 17:58:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.170 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.431 [2024-10-25 17:58:53.614780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.431 BaseBdev1 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.431 [ 00:17:35.431 { 00:17:35.431 "name": "BaseBdev1", 00:17:35.431 "aliases": [ 00:17:35.431 "d37992b5-1bf1-4271-9c67-c246ddf4ab78" 00:17:35.431 ], 00:17:35.431 "product_name": "Malloc disk", 00:17:35.431 "block_size": 4096, 00:17:35.431 "num_blocks": 8192, 00:17:35.431 "uuid": "d37992b5-1bf1-4271-9c67-c246ddf4ab78", 00:17:35.431 "md_size": 32, 00:17:35.431 "md_interleave": false, 00:17:35.431 "dif_type": 0, 00:17:35.431 "assigned_rate_limits": { 00:17:35.431 "rw_ios_per_sec": 0, 00:17:35.431 "rw_mbytes_per_sec": 0, 00:17:35.431 "r_mbytes_per_sec": 0, 00:17:35.431 "w_mbytes_per_sec": 0 00:17:35.431 }, 00:17:35.431 "claimed": true, 00:17:35.431 "claim_type": "exclusive_write", 00:17:35.431 "zoned": false, 00:17:35.431 "supported_io_types": { 00:17:35.431 "read": true, 00:17:35.431 "write": true, 00:17:35.431 "unmap": true, 00:17:35.431 "flush": true, 00:17:35.431 "reset": true, 00:17:35.431 "nvme_admin": false, 00:17:35.431 "nvme_io": false, 00:17:35.431 "nvme_io_md": false, 00:17:35.431 "write_zeroes": true, 00:17:35.431 "zcopy": true, 00:17:35.431 "get_zone_info": false, 00:17:35.431 "zone_management": false, 00:17:35.431 "zone_append": false, 00:17:35.431 "compare": false, 00:17:35.431 "compare_and_write": false, 00:17:35.431 "abort": true, 00:17:35.431 "seek_hole": false, 00:17:35.431 "seek_data": false, 00:17:35.431 "copy": true, 00:17:35.431 "nvme_iov_md": false 00:17:35.431 }, 00:17:35.431 "memory_domains": [ 00:17:35.431 { 00:17:35.431 "dma_device_id": "system", 00:17:35.431 "dma_device_type": 1 00:17:35.431 }, 
00:17:35.431 { 00:17:35.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.431 "dma_device_type": 2 00:17:35.431 } 00:17:35.431 ], 00:17:35.431 "driver_specific": {} 00:17:35.431 } 00:17:35.431 ] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.431 "name": "Existed_Raid", 00:17:35.431 "uuid": "c1519d89-7c98-482c-b0cf-3a6211938f4e", 00:17:35.431 "strip_size_kb": 0, 00:17:35.431 "state": "configuring", 00:17:35.431 "raid_level": "raid1", 00:17:35.431 "superblock": true, 00:17:35.431 "num_base_bdevs": 2, 00:17:35.431 "num_base_bdevs_discovered": 1, 00:17:35.431 "num_base_bdevs_operational": 2, 00:17:35.431 "base_bdevs_list": [ 00:17:35.431 { 00:17:35.431 "name": "BaseBdev1", 00:17:35.431 "uuid": "d37992b5-1bf1-4271-9c67-c246ddf4ab78", 00:17:35.431 "is_configured": true, 00:17:35.431 "data_offset": 256, 00:17:35.431 "data_size": 7936 00:17:35.431 }, 00:17:35.431 { 00:17:35.431 "name": "BaseBdev2", 00:17:35.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.431 "is_configured": false, 00:17:35.431 "data_offset": 0, 00:17:35.431 "data_size": 0 00:17:35.431 } 00:17:35.431 ] 00:17:35.431 }' 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.431 17:58:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:17:36.002 [2024-10-25 17:58:54.161988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.002 [2024-10-25 17:58:54.162054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.002 [2024-10-25 17:58:54.174028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.002 [2024-10-25 17:58:54.176180] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.002 [2024-10-25 17:58:54.176232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.002 "name": "Existed_Raid", 00:17:36.002 "uuid": "2d8a2668-b3c1-408c-ad26-c3131631f627", 00:17:36.002 "strip_size_kb": 0, 00:17:36.002 "state": "configuring", 00:17:36.002 "raid_level": "raid1", 00:17:36.002 "superblock": true, 00:17:36.002 "num_base_bdevs": 2, 00:17:36.002 "num_base_bdevs_discovered": 1, 00:17:36.002 
"num_base_bdevs_operational": 2, 00:17:36.002 "base_bdevs_list": [ 00:17:36.002 { 00:17:36.002 "name": "BaseBdev1", 00:17:36.002 "uuid": "d37992b5-1bf1-4271-9c67-c246ddf4ab78", 00:17:36.002 "is_configured": true, 00:17:36.002 "data_offset": 256, 00:17:36.002 "data_size": 7936 00:17:36.002 }, 00:17:36.002 { 00:17:36.002 "name": "BaseBdev2", 00:17:36.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.002 "is_configured": false, 00:17:36.002 "data_offset": 0, 00:17:36.002 "data_size": 0 00:17:36.002 } 00:17:36.002 ] 00:17:36.002 }' 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.002 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.270 [2024-10-25 17:58:54.694675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.270 [2024-10-25 17:58:54.694970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:36.270 [2024-10-25 17:58:54.694991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.270 [2024-10-25 17:58:54.695090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:36.270 [2024-10-25 17:58:54.695242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:36.270 [2024-10-25 17:58:54.695259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:36.270 [2024-10-25 
17:58:54.695378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.270 BaseBdev2 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.270 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.550 [ 00:17:36.550 { 00:17:36.550 "name": "BaseBdev2", 00:17:36.550 "aliases": [ 00:17:36.550 
"f82b394e-d2b8-45cb-8cc7-87d857a66be4" 00:17:36.550 ], 00:17:36.550 "product_name": "Malloc disk", 00:17:36.550 "block_size": 4096, 00:17:36.550 "num_blocks": 8192, 00:17:36.550 "uuid": "f82b394e-d2b8-45cb-8cc7-87d857a66be4", 00:17:36.550 "md_size": 32, 00:17:36.550 "md_interleave": false, 00:17:36.550 "dif_type": 0, 00:17:36.550 "assigned_rate_limits": { 00:17:36.550 "rw_ios_per_sec": 0, 00:17:36.550 "rw_mbytes_per_sec": 0, 00:17:36.550 "r_mbytes_per_sec": 0, 00:17:36.550 "w_mbytes_per_sec": 0 00:17:36.550 }, 00:17:36.550 "claimed": true, 00:17:36.550 "claim_type": "exclusive_write", 00:17:36.550 "zoned": false, 00:17:36.550 "supported_io_types": { 00:17:36.550 "read": true, 00:17:36.550 "write": true, 00:17:36.550 "unmap": true, 00:17:36.550 "flush": true, 00:17:36.550 "reset": true, 00:17:36.550 "nvme_admin": false, 00:17:36.550 "nvme_io": false, 00:17:36.550 "nvme_io_md": false, 00:17:36.550 "write_zeroes": true, 00:17:36.550 "zcopy": true, 00:17:36.550 "get_zone_info": false, 00:17:36.550 "zone_management": false, 00:17:36.550 "zone_append": false, 00:17:36.550 "compare": false, 00:17:36.550 "compare_and_write": false, 00:17:36.550 "abort": true, 00:17:36.550 "seek_hole": false, 00:17:36.550 "seek_data": false, 00:17:36.550 "copy": true, 00:17:36.550 "nvme_iov_md": false 00:17:36.550 }, 00:17:36.550 "memory_domains": [ 00:17:36.550 { 00:17:36.550 "dma_device_id": "system", 00:17:36.550 "dma_device_type": 1 00:17:36.550 }, 00:17:36.550 { 00:17:36.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.550 "dma_device_type": 2 00:17:36.550 } 00:17:36.550 ], 00:17:36.550 "driver_specific": {} 00:17:36.550 } 00:17:36.550 ] 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.550 17:58:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.550 "name": "Existed_Raid", 00:17:36.550 "uuid": "2d8a2668-b3c1-408c-ad26-c3131631f627", 00:17:36.550 "strip_size_kb": 0, 00:17:36.550 "state": "online", 00:17:36.550 "raid_level": "raid1", 00:17:36.550 "superblock": true, 00:17:36.550 "num_base_bdevs": 2, 00:17:36.550 "num_base_bdevs_discovered": 2, 00:17:36.550 "num_base_bdevs_operational": 2, 00:17:36.550 "base_bdevs_list": [ 00:17:36.550 { 00:17:36.550 "name": "BaseBdev1", 00:17:36.550 "uuid": "d37992b5-1bf1-4271-9c67-c246ddf4ab78", 00:17:36.550 "is_configured": true, 00:17:36.550 "data_offset": 256, 00:17:36.550 "data_size": 7936 00:17:36.550 }, 00:17:36.550 { 00:17:36.550 "name": "BaseBdev2", 00:17:36.550 "uuid": "f82b394e-d2b8-45cb-8cc7-87d857a66be4", 00:17:36.550 "is_configured": true, 00:17:36.550 "data_offset": 256, 00:17:36.550 "data_size": 7936 00:17:36.550 } 00:17:36.550 ] 00:17:36.550 }' 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.550 17:58:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.811 17:58:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.811 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.811 [2024-10-25 17:58:55.226338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.071 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.071 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:37.071 "name": "Existed_Raid", 00:17:37.071 "aliases": [ 00:17:37.071 "2d8a2668-b3c1-408c-ad26-c3131631f627" 00:17:37.071 ], 00:17:37.071 "product_name": "Raid Volume", 00:17:37.071 "block_size": 4096, 00:17:37.071 "num_blocks": 7936, 00:17:37.071 "uuid": "2d8a2668-b3c1-408c-ad26-c3131631f627", 00:17:37.071 "md_size": 32, 00:17:37.071 "md_interleave": false, 00:17:37.071 "dif_type": 0, 00:17:37.071 "assigned_rate_limits": { 00:17:37.071 "rw_ios_per_sec": 0, 00:17:37.071 "rw_mbytes_per_sec": 0, 00:17:37.071 "r_mbytes_per_sec": 0, 00:17:37.071 "w_mbytes_per_sec": 0 00:17:37.071 }, 00:17:37.071 "claimed": false, 00:17:37.071 "zoned": false, 00:17:37.071 "supported_io_types": { 00:17:37.071 "read": true, 00:17:37.071 "write": true, 00:17:37.071 "unmap": false, 00:17:37.071 "flush": false, 00:17:37.071 "reset": true, 00:17:37.071 "nvme_admin": false, 00:17:37.071 "nvme_io": false, 00:17:37.071 "nvme_io_md": false, 00:17:37.071 "write_zeroes": true, 00:17:37.071 "zcopy": false, 00:17:37.071 "get_zone_info": 
false, 00:17:37.071 "zone_management": false, 00:17:37.071 "zone_append": false, 00:17:37.071 "compare": false, 00:17:37.071 "compare_and_write": false, 00:17:37.071 "abort": false, 00:17:37.071 "seek_hole": false, 00:17:37.071 "seek_data": false, 00:17:37.071 "copy": false, 00:17:37.071 "nvme_iov_md": false 00:17:37.071 }, 00:17:37.071 "memory_domains": [ 00:17:37.071 { 00:17:37.071 "dma_device_id": "system", 00:17:37.071 "dma_device_type": 1 00:17:37.071 }, 00:17:37.071 { 00:17:37.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.071 "dma_device_type": 2 00:17:37.071 }, 00:17:37.071 { 00:17:37.071 "dma_device_id": "system", 00:17:37.071 "dma_device_type": 1 00:17:37.071 }, 00:17:37.071 { 00:17:37.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.071 "dma_device_type": 2 00:17:37.071 } 00:17:37.071 ], 00:17:37.072 "driver_specific": { 00:17:37.072 "raid": { 00:17:37.072 "uuid": "2d8a2668-b3c1-408c-ad26-c3131631f627", 00:17:37.072 "strip_size_kb": 0, 00:17:37.072 "state": "online", 00:17:37.072 "raid_level": "raid1", 00:17:37.072 "superblock": true, 00:17:37.072 "num_base_bdevs": 2, 00:17:37.072 "num_base_bdevs_discovered": 2, 00:17:37.072 "num_base_bdevs_operational": 2, 00:17:37.072 "base_bdevs_list": [ 00:17:37.072 { 00:17:37.072 "name": "BaseBdev1", 00:17:37.072 "uuid": "d37992b5-1bf1-4271-9c67-c246ddf4ab78", 00:17:37.072 "is_configured": true, 00:17:37.072 "data_offset": 256, 00:17:37.072 "data_size": 7936 00:17:37.072 }, 00:17:37.072 { 00:17:37.072 "name": "BaseBdev2", 00:17:37.072 "uuid": "f82b394e-d2b8-45cb-8cc7-87d857a66be4", 00:17:37.072 "is_configured": true, 00:17:37.072 "data_offset": 256, 00:17:37.072 "data_size": 7936 00:17:37.072 } 00:17:37.072 ] 00:17:37.072 } 00:17:37.072 } 00:17:37.072 }' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.072 17:58:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:37.072 BaseBdev2' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.072 17:58:55 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.072 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.072 [2024-10-25 17:58:55.493687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.332 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.333 "name": "Existed_Raid", 
00:17:37.333 "uuid": "2d8a2668-b3c1-408c-ad26-c3131631f627", 00:17:37.333 "strip_size_kb": 0, 00:17:37.333 "state": "online", 00:17:37.333 "raid_level": "raid1", 00:17:37.333 "superblock": true, 00:17:37.333 "num_base_bdevs": 2, 00:17:37.333 "num_base_bdevs_discovered": 1, 00:17:37.333 "num_base_bdevs_operational": 1, 00:17:37.333 "base_bdevs_list": [ 00:17:37.333 { 00:17:37.333 "name": null, 00:17:37.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.333 "is_configured": false, 00:17:37.333 "data_offset": 0, 00:17:37.333 "data_size": 7936 00:17:37.333 }, 00:17:37.333 { 00:17:37.333 "name": "BaseBdev2", 00:17:37.333 "uuid": "f82b394e-d2b8-45cb-8cc7-87d857a66be4", 00:17:37.333 "is_configured": true, 00:17:37.333 "data_offset": 256, 00:17:37.333 "data_size": 7936 00:17:37.333 } 00:17:37.333 ] 00:17:37.333 }' 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.333 17:58:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.902 [2024-10-25 17:58:56.105008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:37.902 [2024-10-25 17:58:56.105137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.902 [2024-10-25 17:58:56.231298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.902 [2024-10-25 17:58:56.231355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.902 [2024-10-25 17:58:56.231368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87107 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87107 ']' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87107 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87107 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.902 killing process with pid 87107 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87107' 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87107 00:17:37.902 [2024-10-25 17:58:56.327137] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.902 17:58:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87107 00:17:38.162 [2024-10-25 17:58:56.346206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.540 17:58:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:39.540 00:17:39.540 real 0m5.561s 00:17:39.540 user 0m7.930s 00:17:39.540 sys 0m0.988s 00:17:39.540 17:58:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:39.540 17:58:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.541 ************************************ 00:17:39.541 END TEST raid_state_function_test_sb_md_separate 00:17:39.541 ************************************ 00:17:39.541 17:58:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:39.541 17:58:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:39.541 17:58:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:39.541 17:58:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.541 ************************************ 00:17:39.541 START TEST raid_superblock_test_md_separate 00:17:39.541 ************************************ 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87358 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87358 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87358 ']' 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.541 17:58:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.541 17:58:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.541 [2024-10-25 17:58:57.811768] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:39.541 [2024-10-25 17:58:57.811917] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87358 ] 00:17:39.800 [2024-10-25 17:58:57.994371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.800 [2024-10-25 17:58:58.132380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.059 [2024-10-25 17:58:58.374767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.059 [2024-10-25 17:58:58.374853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.318 17:58:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.318 malloc1 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.318 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.577 [2024-10-25 17:58:58.759111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.577 [2024-10-25 17:58:58.759186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.577 [2024-10-25 17:58:58.759213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:17:40.577 [2024-10-25 17:58:58.759226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.577 [2024-10-25 17:58:58.761492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.577 [2024-10-25 17:58:58.761545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.577 pt1 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.578 malloc2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.578 [2024-10-25 17:58:58.821997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.578 [2024-10-25 17:58:58.822075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.578 [2024-10-25 17:58:58.822105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:40.578 [2024-10-25 17:58:58.822119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.578 [2024-10-25 17:58:58.824468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.578 [2024-10-25 17:58:58.824529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.578 pt2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.578 [2024-10-25 17:58:58.834016] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.578 [2024-10-25 17:58:58.836255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.578 [2024-10-25 17:58:58.836492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:40.578 [2024-10-25 17:58:58.836520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:40.578 [2024-10-25 17:58:58.836634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:40.578 [2024-10-25 17:58:58.836798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:40.578 [2024-10-25 17:58:58.836821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:40.578 [2024-10-25 17:58:58.836988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.578 17:58:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.578 "name": "raid_bdev1", 00:17:40.578 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:40.578 "strip_size_kb": 0, 00:17:40.578 "state": "online", 00:17:40.578 "raid_level": "raid1", 00:17:40.578 "superblock": true, 00:17:40.578 "num_base_bdevs": 2, 00:17:40.578 "num_base_bdevs_discovered": 2, 00:17:40.578 "num_base_bdevs_operational": 2, 00:17:40.578 "base_bdevs_list": [ 00:17:40.578 { 00:17:40.578 "name": "pt1", 00:17:40.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.578 "is_configured": true, 00:17:40.578 "data_offset": 256, 00:17:40.578 "data_size": 7936 00:17:40.578 }, 00:17:40.578 { 00:17:40.578 "name": "pt2", 00:17:40.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.578 "is_configured": true, 00:17:40.578 "data_offset": 256, 00:17:40.578 "data_size": 7936 00:17:40.578 } 00:17:40.578 ] 00:17:40.578 }' 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:40.578 17:58:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:41.146 [2024-10-25 17:58:59.338003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:41.146 "name": "raid_bdev1", 00:17:41.146 "aliases": [ 00:17:41.146 "53349137-875c-4bb3-afd6-3540fc0eb6ee" 00:17:41.146 ], 00:17:41.146 "product_name": "Raid Volume", 00:17:41.146 "block_size": 4096, 00:17:41.146 "num_blocks": 7936, 00:17:41.146 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:41.146 "md_size": 32, 
00:17:41.146 "md_interleave": false, 00:17:41.146 "dif_type": 0, 00:17:41.146 "assigned_rate_limits": { 00:17:41.146 "rw_ios_per_sec": 0, 00:17:41.146 "rw_mbytes_per_sec": 0, 00:17:41.146 "r_mbytes_per_sec": 0, 00:17:41.146 "w_mbytes_per_sec": 0 00:17:41.146 }, 00:17:41.146 "claimed": false, 00:17:41.146 "zoned": false, 00:17:41.146 "supported_io_types": { 00:17:41.146 "read": true, 00:17:41.146 "write": true, 00:17:41.146 "unmap": false, 00:17:41.146 "flush": false, 00:17:41.146 "reset": true, 00:17:41.146 "nvme_admin": false, 00:17:41.146 "nvme_io": false, 00:17:41.146 "nvme_io_md": false, 00:17:41.146 "write_zeroes": true, 00:17:41.146 "zcopy": false, 00:17:41.146 "get_zone_info": false, 00:17:41.146 "zone_management": false, 00:17:41.146 "zone_append": false, 00:17:41.146 "compare": false, 00:17:41.146 "compare_and_write": false, 00:17:41.146 "abort": false, 00:17:41.146 "seek_hole": false, 00:17:41.146 "seek_data": false, 00:17:41.146 "copy": false, 00:17:41.146 "nvme_iov_md": false 00:17:41.146 }, 00:17:41.146 "memory_domains": [ 00:17:41.146 { 00:17:41.146 "dma_device_id": "system", 00:17:41.146 "dma_device_type": 1 00:17:41.146 }, 00:17:41.146 { 00:17:41.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.146 "dma_device_type": 2 00:17:41.146 }, 00:17:41.146 { 00:17:41.146 "dma_device_id": "system", 00:17:41.146 "dma_device_type": 1 00:17:41.146 }, 00:17:41.146 { 00:17:41.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.146 "dma_device_type": 2 00:17:41.146 } 00:17:41.146 ], 00:17:41.146 "driver_specific": { 00:17:41.146 "raid": { 00:17:41.146 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:41.146 "strip_size_kb": 0, 00:17:41.146 "state": "online", 00:17:41.146 "raid_level": "raid1", 00:17:41.146 "superblock": true, 00:17:41.146 "num_base_bdevs": 2, 00:17:41.146 "num_base_bdevs_discovered": 2, 00:17:41.146 "num_base_bdevs_operational": 2, 00:17:41.146 "base_bdevs_list": [ 00:17:41.146 { 00:17:41.146 "name": "pt1", 00:17:41.146 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:41.146 "is_configured": true, 00:17:41.146 "data_offset": 256, 00:17:41.146 "data_size": 7936 00:17:41.146 }, 00:17:41.146 { 00:17:41.146 "name": "pt2", 00:17:41.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.146 "is_configured": true, 00:17:41.146 "data_offset": 256, 00:17:41.146 "data_size": 7936 00:17:41.146 } 00:17:41.146 ] 00:17:41.146 } 00:17:41.146 } 00:17:41.146 }' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:41.146 pt2' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.146 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.147 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:41.147 [2024-10-25 17:58:59.581574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=53349137-875c-4bb3-afd6-3540fc0eb6ee 00:17:41.406 
17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 53349137-875c-4bb3-afd6-3540fc0eb6ee ']' 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 [2024-10-25 17:58:59.629127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.406 [2024-10-25 17:58:59.629173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.406 [2024-10-25 17:58:59.629292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.406 [2024-10-25 17:58:59.629362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.406 [2024-10-25 17:58:59.629377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:41.406 17:58:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.407 [2024-10-25 17:58:59.773015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:41.407 [2024-10-25 17:58:59.775195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:41.407 [2024-10-25 17:58:59.775320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:41.407 [2024-10-25 17:58:59.775396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:17:41.407 [2024-10-25 17:58:59.775418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.407 [2024-10-25 17:58:59.775432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:41.407 request: 00:17:41.407 { 00:17:41.407 "name": "raid_bdev1", 00:17:41.407 "raid_level": "raid1", 00:17:41.407 "base_bdevs": [ 00:17:41.407 "malloc1", 00:17:41.407 "malloc2" 00:17:41.407 ], 00:17:41.407 "superblock": false, 00:17:41.407 "method": "bdev_raid_create", 00:17:41.407 "req_id": 1 00:17:41.407 } 00:17:41.407 Got JSON-RPC error response 00:17:41.407 response: 00:17:41.407 { 00:17:41.407 "code": -17, 00:17:41.407 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:41.407 } 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.407 17:58:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.407 [2024-10-25 17:58:59.833006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.407 [2024-10-25 17:58:59.833078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.407 [2024-10-25 17:58:59.833097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:41.407 [2024-10-25 17:58:59.833114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.407 [2024-10-25 17:58:59.835453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.407 [2024-10-25 17:58:59.835501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.407 [2024-10-25 17:58:59.835562] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:41.407 [2024-10-25 17:58:59.835638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.407 pt1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.407 
17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.407 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.666 "name": "raid_bdev1", 00:17:41.666 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:41.666 "strip_size_kb": 0, 00:17:41.666 "state": "configuring", 00:17:41.666 "raid_level": "raid1", 00:17:41.666 "superblock": true, 00:17:41.666 "num_base_bdevs": 2, 00:17:41.666 "num_base_bdevs_discovered": 1, 00:17:41.666 
"num_base_bdevs_operational": 2, 00:17:41.666 "base_bdevs_list": [ 00:17:41.666 { 00:17:41.666 "name": "pt1", 00:17:41.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.666 "is_configured": true, 00:17:41.666 "data_offset": 256, 00:17:41.666 "data_size": 7936 00:17:41.666 }, 00:17:41.666 { 00:17:41.666 "name": null, 00:17:41.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.666 "is_configured": false, 00:17:41.666 "data_offset": 256, 00:17:41.666 "data_size": 7936 00:17:41.666 } 00:17:41.666 ] 00:17:41.666 }' 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.666 17:58:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 [2024-10-25 17:59:00.344885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.925 [2024-10-25 17:59:00.344981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.925 [2024-10-25 17:59:00.345007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:41.925 [2024-10-25 17:59:00.345021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.925 
[2024-10-25 17:59:00.345300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.925 [2024-10-25 17:59:00.345328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.925 [2024-10-25 17:59:00.345391] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.925 [2024-10-25 17:59:00.345418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.925 [2024-10-25 17:59:00.345554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:41.925 [2024-10-25 17:59:00.345575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:41.925 [2024-10-25 17:59:00.345657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:41.925 [2024-10-25 17:59:00.345804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:41.925 [2024-10-25 17:59:00.345821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:41.925 [2024-10-25 17:59:00.345957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.925 pt2 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.185 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.185 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.185 "name": "raid_bdev1", 00:17:42.185 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:42.185 "strip_size_kb": 0, 00:17:42.185 "state": "online", 00:17:42.185 "raid_level": "raid1", 00:17:42.185 "superblock": true, 00:17:42.185 "num_base_bdevs": 2, 00:17:42.185 "num_base_bdevs_discovered": 2, 00:17:42.185 "num_base_bdevs_operational": 2, 00:17:42.185 "base_bdevs_list": [ 00:17:42.185 { 00:17:42.185 "name": 
"pt1", 00:17:42.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.185 "is_configured": true, 00:17:42.185 "data_offset": 256, 00:17:42.185 "data_size": 7936 00:17:42.185 }, 00:17:42.185 { 00:17:42.185 "name": "pt2", 00:17:42.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.185 "is_configured": true, 00:17:42.185 "data_offset": 256, 00:17:42.185 "data_size": 7936 00:17:42.185 } 00:17:42.185 ] 00:17:42.185 }' 00:17:42.185 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.185 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:42.445 [2024-10-25 17:59:00.844418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.445 17:59:00 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.725 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:42.725 "name": "raid_bdev1", 00:17:42.725 "aliases": [ 00:17:42.725 "53349137-875c-4bb3-afd6-3540fc0eb6ee" 00:17:42.725 ], 00:17:42.725 "product_name": "Raid Volume", 00:17:42.725 "block_size": 4096, 00:17:42.725 "num_blocks": 7936, 00:17:42.725 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:42.725 "md_size": 32, 00:17:42.725 "md_interleave": false, 00:17:42.725 "dif_type": 0, 00:17:42.725 "assigned_rate_limits": { 00:17:42.725 "rw_ios_per_sec": 0, 00:17:42.725 "rw_mbytes_per_sec": 0, 00:17:42.725 "r_mbytes_per_sec": 0, 00:17:42.725 "w_mbytes_per_sec": 0 00:17:42.725 }, 00:17:42.725 "claimed": false, 00:17:42.725 "zoned": false, 00:17:42.725 "supported_io_types": { 00:17:42.725 "read": true, 00:17:42.725 "write": true, 00:17:42.725 "unmap": false, 00:17:42.725 "flush": false, 00:17:42.725 "reset": true, 00:17:42.725 "nvme_admin": false, 00:17:42.725 "nvme_io": false, 00:17:42.725 "nvme_io_md": false, 00:17:42.725 "write_zeroes": true, 00:17:42.725 "zcopy": false, 00:17:42.725 "get_zone_info": false, 00:17:42.725 "zone_management": false, 00:17:42.725 "zone_append": false, 00:17:42.725 "compare": false, 00:17:42.725 "compare_and_write": false, 00:17:42.725 "abort": false, 00:17:42.725 "seek_hole": false, 00:17:42.725 "seek_data": false, 00:17:42.725 "copy": false, 00:17:42.725 "nvme_iov_md": false 00:17:42.725 }, 00:17:42.725 "memory_domains": [ 00:17:42.725 { 00:17:42.725 "dma_device_id": "system", 00:17:42.725 "dma_device_type": 1 00:17:42.725 }, 00:17:42.725 { 00:17:42.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.725 "dma_device_type": 2 00:17:42.725 }, 00:17:42.725 { 00:17:42.725 "dma_device_id": "system", 00:17:42.725 "dma_device_type": 1 00:17:42.725 }, 00:17:42.725 { 00:17:42.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.725 
"dma_device_type": 2 00:17:42.725 } 00:17:42.725 ], 00:17:42.725 "driver_specific": { 00:17:42.725 "raid": { 00:17:42.725 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:42.725 "strip_size_kb": 0, 00:17:42.725 "state": "online", 00:17:42.725 "raid_level": "raid1", 00:17:42.725 "superblock": true, 00:17:42.725 "num_base_bdevs": 2, 00:17:42.725 "num_base_bdevs_discovered": 2, 00:17:42.725 "num_base_bdevs_operational": 2, 00:17:42.725 "base_bdevs_list": [ 00:17:42.725 { 00:17:42.725 "name": "pt1", 00:17:42.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.725 "is_configured": true, 00:17:42.725 "data_offset": 256, 00:17:42.725 "data_size": 7936 00:17:42.725 }, 00:17:42.725 { 00:17:42.725 "name": "pt2", 00:17:42.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.725 "is_configured": true, 00:17:42.725 "data_offset": 256, 00:17:42.725 "data_size": 7936 00:17:42.725 } 00:17:42.725 ] 00:17:42.725 } 00:17:42.725 } 00:17:42.725 }' 00:17:42.725 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.725 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:42.725 pt2' 00:17:42.726 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.726 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:42.726 17:59:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:42.726 [2024-10-25 17:59:01.092038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 53349137-875c-4bb3-afd6-3540fc0eb6ee '!=' 53349137-875c-4bb3-afd6-3540fc0eb6ee ']' 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.726 [2024-10-25 17:59:01.135673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.726 
17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.726 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.985 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.985 "name": "raid_bdev1", 00:17:42.985 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:42.985 "strip_size_kb": 0, 00:17:42.985 "state": "online", 00:17:42.985 "raid_level": "raid1", 00:17:42.985 "superblock": true, 00:17:42.985 "num_base_bdevs": 2, 00:17:42.985 "num_base_bdevs_discovered": 1, 00:17:42.985 "num_base_bdevs_operational": 1, 00:17:42.985 "base_bdevs_list": [ 00:17:42.985 { 00:17:42.985 "name": null, 00:17:42.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.985 "is_configured": false, 00:17:42.985 "data_offset": 0, 00:17:42.985 
"data_size": 7936 00:17:42.985 }, 00:17:42.985 { 00:17:42.985 "name": "pt2", 00:17:42.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.985 "is_configured": true, 00:17:42.985 "data_offset": 256, 00:17:42.985 "data_size": 7936 00:17:42.985 } 00:17:42.985 ] 00:17:42.985 }' 00:17:42.985 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.985 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.244 [2024-10-25 17:59:01.598917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.244 [2024-10-25 17:59:01.598955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.244 [2024-10-25 17:59:01.599051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.244 [2024-10-25 17:59:01.599105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.244 [2024-10-25 17:59:01.599118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:43.244 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.245 [2024-10-25 17:59:01.662845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.245 [2024-10-25 17:59:01.662932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.245 [2024-10-25 17:59:01.662955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:43.245 [2024-10-25 17:59:01.662968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.245 [2024-10-25 17:59:01.665358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.245 [2024-10-25 17:59:01.665407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.245 [2024-10-25 17:59:01.665474] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:43.245 [2024-10-25 17:59:01.665539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.245 [2024-10-25 17:59:01.665641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:43.245 [2024-10-25 17:59:01.665657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.245 [2024-10-25 17:59:01.665748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:43.245 [2024-10-25 17:59:01.665914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:43.245 [2024-10-25 17:59:01.665932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:43.245 [2024-10-25 17:59:01.666057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.245 pt2 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.245 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.504 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.504 "name": "raid_bdev1", 00:17:43.504 
"uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:43.504 "strip_size_kb": 0, 00:17:43.504 "state": "online", 00:17:43.504 "raid_level": "raid1", 00:17:43.504 "superblock": true, 00:17:43.504 "num_base_bdevs": 2, 00:17:43.504 "num_base_bdevs_discovered": 1, 00:17:43.504 "num_base_bdevs_operational": 1, 00:17:43.504 "base_bdevs_list": [ 00:17:43.504 { 00:17:43.504 "name": null, 00:17:43.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.504 "is_configured": false, 00:17:43.504 "data_offset": 256, 00:17:43.504 "data_size": 7936 00:17:43.504 }, 00:17:43.504 { 00:17:43.504 "name": "pt2", 00:17:43.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.504 "is_configured": true, 00:17:43.504 "data_offset": 256, 00:17:43.504 "data_size": 7936 00:17:43.504 } 00:17:43.504 ] 00:17:43.504 }' 00:17:43.504 17:59:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.504 17:59:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.765 [2024-10-25 17:59:02.137977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.765 [2024-10-25 17:59:02.138020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.765 [2024-10-25 17:59:02.138118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.765 [2024-10-25 17:59:02.138178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.765 [2024-10-25 17:59:02.138189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.765 [2024-10-25 17:59:02.193964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:43.765 [2024-10-25 17:59:02.194037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.765 [2024-10-25 17:59:02.194060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:43.765 [2024-10-25 17:59:02.194071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.765 [2024-10-25 
17:59:02.196404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.765 [2024-10-25 17:59:02.196446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:43.765 [2024-10-25 17:59:02.196525] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:43.765 [2024-10-25 17:59:02.196580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.765 [2024-10-25 17:59:02.196743] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:43.765 [2024-10-25 17:59:02.196764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.765 [2024-10-25 17:59:02.196788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:43.765 [2024-10-25 17:59:02.196900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.765 [2024-10-25 17:59:02.196993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:43.765 [2024-10-25 17:59:02.197007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.765 [2024-10-25 17:59:02.197096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:43.765 [2024-10-25 17:59:02.197225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:43.765 [2024-10-25 17:59:02.197240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:43.765 [2024-10-25 17:59:02.197361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.765 pt1 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.765 17:59:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.765 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.024 17:59:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.024 "name": "raid_bdev1", 00:17:44.024 "uuid": "53349137-875c-4bb3-afd6-3540fc0eb6ee", 00:17:44.024 "strip_size_kb": 0, 00:17:44.024 "state": "online", 00:17:44.024 "raid_level": "raid1", 00:17:44.024 "superblock": true, 00:17:44.024 "num_base_bdevs": 2, 00:17:44.024 "num_base_bdevs_discovered": 1, 00:17:44.024 "num_base_bdevs_operational": 1, 00:17:44.024 "base_bdevs_list": [ 00:17:44.024 { 00:17:44.024 "name": null, 00:17:44.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.024 "is_configured": false, 00:17:44.024 "data_offset": 256, 00:17:44.024 "data_size": 7936 00:17:44.024 }, 00:17:44.024 { 00:17:44.025 "name": "pt2", 00:17:44.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.025 "is_configured": true, 00:17:44.025 "data_offset": 256, 00:17:44.025 "data_size": 7936 00:17:44.025 } 00:17:44.025 ] 00:17:44.025 }' 00:17:44.025 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.025 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.283 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:44.283 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:44.283 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.283 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.283 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:44.541 [2024-10-25 17:59:02.733331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 53349137-875c-4bb3-afd6-3540fc0eb6ee '!=' 53349137-875c-4bb3-afd6-3540fc0eb6ee ']' 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87358 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87358 ']' 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87358 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87358 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.541 killing process with pid 87358 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87358' 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@969 -- # kill 87358 00:17:44.541 [2024-10-25 17:59:02.818049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.541 [2024-10-25 17:59:02.818174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.541 17:59:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87358 00:17:44.541 [2024-10-25 17:59:02.818236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.541 [2024-10-25 17:59:02.818256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:44.800 [2024-10-25 17:59:03.090227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.179 17:59:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:46.179 00:17:46.179 real 0m6.696s 00:17:46.179 user 0m10.086s 00:17:46.179 sys 0m1.172s 00:17:46.179 17:59:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.179 17:59:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.179 ************************************ 00:17:46.179 END TEST raid_superblock_test_md_separate 00:17:46.179 ************************************ 00:17:46.179 17:59:04 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:46.179 17:59:04 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:46.179 17:59:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:46.179 17:59:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.179 17:59:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.179 ************************************ 00:17:46.179 START TEST raid_rebuild_test_sb_md_separate 00:17:46.179 
************************************ 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87688 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87688 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87688 ']' 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.179 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:46.180 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.180 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.180 17:59:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.180 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:46.180 Zero copy mechanism will not be used. 00:17:46.180 [2024-10-25 17:59:04.573705] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:46.180 [2024-10-25 17:59:04.573863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87688 ] 00:17:46.439 [2024-10-25 17:59:04.752004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.699 [2024-10-25 17:59:04.892810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.699 [2024-10-25 17:59:05.127978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.699 [2024-10-25 17:59:05.128054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.268 BaseBdev1_malloc 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.268 [2024-10-25 17:59:05.562358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.268 [2024-10-25 17:59:05.562426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.268 [2024-10-25 17:59:05.562452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.268 [2024-10-25 17:59:05.562471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.268 [2024-10-25 17:59:05.564751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.268 [2024-10-25 17:59:05.564796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:47.268 BaseBdev1 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:47.268 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.268 17:59:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.268 BaseBdev2_malloc 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.269 [2024-10-25 17:59:05.614060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:47.269 [2024-10-25 17:59:05.614148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.269 [2024-10-25 17:59:05.614178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.269 [2024-10-25 17:59:05.614194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.269 [2024-10-25 17:59:05.616598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.269 [2024-10-25 17:59:05.616646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:47.269 BaseBdev2 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.269 spare_malloc 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.269 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 spare_delay 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 [2024-10-25 17:59:05.722061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.528 [2024-10-25 17:59:05.722142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.528 [2024-10-25 17:59:05.722173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:47.528 [2024-10-25 17:59:05.722193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.528 [2024-10-25 17:59:05.724854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.528 [2024-10-25 17:59:05.724915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.528 spare 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:47.528 
17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 [2024-10-25 17:59:05.734114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.528 [2024-10-25 17:59:05.736764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.528 [2024-10-25 17:59:05.737073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.528 [2024-10-25 17:59:05.737106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.528 [2024-10-25 17:59:05.737234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.528 [2024-10-25 17:59:05.737441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.528 [2024-10-25 17:59:05.737469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.528 [2024-10-25 17:59:05.737654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.528 
17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.528 "name": "raid_bdev1", 00:17:47.528 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:47.528 "strip_size_kb": 0, 00:17:47.528 "state": "online", 00:17:47.528 "raid_level": "raid1", 00:17:47.528 "superblock": true, 00:17:47.528 "num_base_bdevs": 2, 00:17:47.528 "num_base_bdevs_discovered": 2, 00:17:47.528 "num_base_bdevs_operational": 2, 00:17:47.528 "base_bdevs_list": [ 00:17:47.528 { 00:17:47.528 "name": "BaseBdev1", 00:17:47.528 "uuid": "b6ab0f93-3c9f-507e-b541-05053e9f5787", 00:17:47.528 "is_configured": true, 00:17:47.528 "data_offset": 256, 00:17:47.528 "data_size": 7936 00:17:47.528 }, 00:17:47.528 { 00:17:47.528 "name": "BaseBdev2", 00:17:47.528 "uuid": 
"8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:47.528 "is_configured": true, 00:17:47.528 "data_offset": 256, 00:17:47.528 "data_size": 7936 00:17:47.528 } 00:17:47.528 ] 00:17:47.528 }' 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.528 17:59:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:47.786 [2024-10-25 17:59:06.137845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:47.786 17:59:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.786 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:48.045 [2024-10-25 17:59:06.453117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:48.045 /dev/nbd0 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.304 17:59:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.304 1+0 records in 00:17:48.304 1+0 records out 00:17:48.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558329 s, 7.3 MB/s 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:48.304 17:59:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:48.959 7936+0 records in 00:17:48.959 7936+0 records out 00:17:48.959 32505856 bytes (33 MB, 31 MiB) copied, 0.80412 s, 40.4 MB/s 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.959 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:49.241 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.241 [2024-10-25 17:59:07.608306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.242 [2024-10-25 17:59:07.620431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.242 17:59:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.242 "name": "raid_bdev1", 00:17:49.242 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:49.242 "strip_size_kb": 0, 00:17:49.242 "state": "online", 00:17:49.242 "raid_level": "raid1", 00:17:49.242 "superblock": true, 00:17:49.242 "num_base_bdevs": 2, 00:17:49.242 "num_base_bdevs_discovered": 1, 00:17:49.242 "num_base_bdevs_operational": 1, 00:17:49.242 "base_bdevs_list": [ 00:17:49.242 { 00:17:49.242 "name": null, 00:17:49.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.242 "is_configured": false, 00:17:49.242 "data_offset": 0, 00:17:49.242 "data_size": 7936 00:17:49.242 }, 00:17:49.242 { 00:17:49.242 "name": "BaseBdev2", 00:17:49.242 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:49.242 "is_configured": true, 00:17:49.242 "data_offset": 256, 00:17:49.242 "data_size": 7936 00:17:49.242 } 
00:17:49.242 ] 00:17:49.242 }' 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.242 17:59:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.808 17:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.808 17:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.808 17:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.808 [2024-10-25 17:59:08.031844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.808 [2024-10-25 17:59:08.047702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:49.808 17:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.808 17:59:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:49.808 [2024-10-25 17:59:08.050213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.745 "name": "raid_bdev1", 00:17:50.745 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:50.745 "strip_size_kb": 0, 00:17:50.745 "state": "online", 00:17:50.745 "raid_level": "raid1", 00:17:50.745 "superblock": true, 00:17:50.745 "num_base_bdevs": 2, 00:17:50.745 "num_base_bdevs_discovered": 2, 00:17:50.745 "num_base_bdevs_operational": 2, 00:17:50.745 "process": { 00:17:50.745 "type": "rebuild", 00:17:50.745 "target": "spare", 00:17:50.745 "progress": { 00:17:50.745 "blocks": 2560, 00:17:50.745 "percent": 32 00:17:50.745 } 00:17:50.745 }, 00:17:50.745 "base_bdevs_list": [ 00:17:50.745 { 00:17:50.745 "name": "spare", 00:17:50.745 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:50.745 "is_configured": true, 00:17:50.745 "data_offset": 256, 00:17:50.745 "data_size": 7936 00:17:50.745 }, 00:17:50.745 { 00:17:50.745 "name": "BaseBdev2", 00:17:50.745 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:50.745 "is_configured": true, 00:17:50.745 "data_offset": 256, 00:17:50.745 "data_size": 7936 00:17:50.745 } 00:17:50.745 ] 00:17:50.745 }' 00:17:50.745 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.746 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.746 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.005 [2024-10-25 17:59:09.217556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.005 [2024-10-25 17:59:09.256897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.005 [2024-10-25 17:59:09.256988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.005 [2024-10-25 17:59:09.257007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.005 [2024-10-25 17:59:09.257018] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.005 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.005 "name": "raid_bdev1", 00:17:51.005 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:51.006 "strip_size_kb": 0, 00:17:51.006 "state": "online", 00:17:51.006 "raid_level": "raid1", 00:17:51.006 "superblock": true, 00:17:51.006 "num_base_bdevs": 2, 00:17:51.006 "num_base_bdevs_discovered": 1, 00:17:51.006 "num_base_bdevs_operational": 1, 00:17:51.006 "base_bdevs_list": [ 00:17:51.006 { 00:17:51.006 "name": null, 00:17:51.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.006 "is_configured": false, 00:17:51.006 "data_offset": 0, 00:17:51.006 "data_size": 7936 00:17:51.006 }, 00:17:51.006 { 00:17:51.006 "name": "BaseBdev2", 00:17:51.006 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:51.006 "is_configured": true, 00:17:51.006 "data_offset": 
256, 00:17:51.006 "data_size": 7936 00:17:51.006 } 00:17:51.006 ] 00:17:51.006 }' 00:17:51.006 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.006 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.265 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.265 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.265 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.266 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.526 "name": "raid_bdev1", 00:17:51.526 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:51.526 "strip_size_kb": 0, 00:17:51.526 "state": "online", 00:17:51.526 "raid_level": "raid1", 00:17:51.526 "superblock": true, 00:17:51.526 "num_base_bdevs": 2, 00:17:51.526 "num_base_bdevs_discovered": 1, 00:17:51.526 "num_base_bdevs_operational": 1, 
00:17:51.526 "base_bdevs_list": [ 00:17:51.526 { 00:17:51.526 "name": null, 00:17:51.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.526 "is_configured": false, 00:17:51.526 "data_offset": 0, 00:17:51.526 "data_size": 7936 00:17:51.526 }, 00:17:51.526 { 00:17:51.526 "name": "BaseBdev2", 00:17:51.526 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:51.526 "is_configured": true, 00:17:51.526 "data_offset": 256, 00:17:51.526 "data_size": 7936 00:17:51.526 } 00:17:51.526 ] 00:17:51.526 }' 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.526 [2024-10-25 17:59:09.833031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.526 [2024-10-25 17:59:09.849407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.526 17:59:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:51.526 [2024-10-25 17:59:09.851615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.466 17:59:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.466 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.726 "name": "raid_bdev1", 00:17:52.726 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:52.726 "strip_size_kb": 0, 00:17:52.726 "state": "online", 00:17:52.726 "raid_level": "raid1", 00:17:52.726 "superblock": true, 00:17:52.726 "num_base_bdevs": 2, 00:17:52.726 "num_base_bdevs_discovered": 2, 00:17:52.726 "num_base_bdevs_operational": 2, 00:17:52.726 "process": { 00:17:52.726 "type": "rebuild", 00:17:52.726 "target": "spare", 00:17:52.726 "progress": { 00:17:52.726 "blocks": 2560, 00:17:52.726 "percent": 32 00:17:52.726 } 00:17:52.726 }, 00:17:52.726 "base_bdevs_list": [ 00:17:52.726 { 00:17:52.726 "name": "spare", 00:17:52.726 "uuid": 
"3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 256, 00:17:52.726 "data_size": 7936 00:17:52.726 }, 00:17:52.726 { 00:17:52.726 "name": "BaseBdev2", 00:17:52.726 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 256, 00:17:52.726 "data_size": 7936 00:17:52.726 } 00:17:52.726 ] 00:17:52.726 }' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:52.726 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=715 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.726 
17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.726 17:59:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.726 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.726 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.726 "name": "raid_bdev1", 00:17:52.726 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:52.726 "strip_size_kb": 0, 00:17:52.726 "state": "online", 00:17:52.726 "raid_level": "raid1", 00:17:52.726 "superblock": true, 00:17:52.726 "num_base_bdevs": 2, 00:17:52.726 "num_base_bdevs_discovered": 2, 00:17:52.726 "num_base_bdevs_operational": 2, 00:17:52.726 "process": { 00:17:52.726 "type": "rebuild", 00:17:52.726 "target": "spare", 00:17:52.726 "progress": { 00:17:52.726 "blocks": 2816, 00:17:52.726 "percent": 35 00:17:52.726 } 00:17:52.726 }, 00:17:52.726 "base_bdevs_list": [ 00:17:52.726 { 00:17:52.726 "name": "spare", 00:17:52.726 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 256, 00:17:52.726 "data_size": 7936 00:17:52.726 
}, 00:17:52.726 { 00:17:52.726 "name": "BaseBdev2", 00:17:52.726 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 256, 00:17:52.726 "data_size": 7936 00:17:52.726 } 00:17:52.726 ] 00:17:52.726 }' 00:17:52.727 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.727 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.727 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.727 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.727 17:59:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.104 "name": "raid_bdev1", 00:17:54.104 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:54.104 "strip_size_kb": 0, 00:17:54.104 "state": "online", 00:17:54.104 "raid_level": "raid1", 00:17:54.104 "superblock": true, 00:17:54.104 "num_base_bdevs": 2, 00:17:54.104 "num_base_bdevs_discovered": 2, 00:17:54.104 "num_base_bdevs_operational": 2, 00:17:54.104 "process": { 00:17:54.104 "type": "rebuild", 00:17:54.104 "target": "spare", 00:17:54.104 "progress": { 00:17:54.104 "blocks": 5632, 00:17:54.104 "percent": 70 00:17:54.104 } 00:17:54.104 }, 00:17:54.104 "base_bdevs_list": [ 00:17:54.104 { 00:17:54.104 "name": "spare", 00:17:54.104 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:54.104 "is_configured": true, 00:17:54.104 "data_offset": 256, 00:17:54.104 "data_size": 7936 00:17:54.104 }, 00:17:54.104 { 00:17:54.104 "name": "BaseBdev2", 00:17:54.104 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:54.104 "is_configured": true, 00:17:54.104 "data_offset": 256, 00:17:54.104 "data_size": 7936 00:17:54.104 } 00:17:54.104 ] 00:17:54.104 }' 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.104 17:59:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.671 [2024-10-25 17:59:12.967887] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:54.671 [2024-10-25 17:59:12.967989] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:54.671 [2024-10-25 17:59:12.968136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.931 "name": "raid_bdev1", 00:17:54.931 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:54.931 
"strip_size_kb": 0, 00:17:54.931 "state": "online", 00:17:54.931 "raid_level": "raid1", 00:17:54.931 "superblock": true, 00:17:54.931 "num_base_bdevs": 2, 00:17:54.931 "num_base_bdevs_discovered": 2, 00:17:54.931 "num_base_bdevs_operational": 2, 00:17:54.931 "base_bdevs_list": [ 00:17:54.931 { 00:17:54.931 "name": "spare", 00:17:54.931 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:54.931 "is_configured": true, 00:17:54.931 "data_offset": 256, 00:17:54.931 "data_size": 7936 00:17:54.931 }, 00:17:54.931 { 00:17:54.931 "name": "BaseBdev2", 00:17:54.931 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:54.931 "is_configured": true, 00:17:54.931 "data_offset": 256, 00:17:54.931 "data_size": 7936 00:17:54.931 } 00:17:54.931 ] 00:17:54.931 }' 00:17:54.931 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.192 17:59:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.192 "name": "raid_bdev1", 00:17:55.192 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:55.192 "strip_size_kb": 0, 00:17:55.192 "state": "online", 00:17:55.192 "raid_level": "raid1", 00:17:55.192 "superblock": true, 00:17:55.192 "num_base_bdevs": 2, 00:17:55.192 "num_base_bdevs_discovered": 2, 00:17:55.192 "num_base_bdevs_operational": 2, 00:17:55.192 "base_bdevs_list": [ 00:17:55.192 { 00:17:55.192 "name": "spare", 00:17:55.192 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:55.192 "is_configured": true, 00:17:55.192 "data_offset": 256, 00:17:55.192 "data_size": 7936 00:17:55.192 }, 00:17:55.192 { 00:17:55.192 "name": "BaseBdev2", 00:17:55.192 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:55.192 "is_configured": true, 00:17:55.192 "data_offset": 256, 00:17:55.192 "data_size": 7936 00:17:55.192 } 00:17:55.192 ] 00:17:55.192 }' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.192 17:59:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.192 "name": "raid_bdev1", 00:17:55.192 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:55.192 "strip_size_kb": 0, 00:17:55.192 "state": "online", 00:17:55.192 "raid_level": "raid1", 00:17:55.192 "superblock": true, 00:17:55.192 "num_base_bdevs": 2, 00:17:55.192 "num_base_bdevs_discovered": 2, 00:17:55.192 "num_base_bdevs_operational": 2, 00:17:55.192 "base_bdevs_list": [ 00:17:55.192 { 00:17:55.192 "name": "spare", 00:17:55.192 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:55.192 "is_configured": true, 00:17:55.192 "data_offset": 256, 00:17:55.192 "data_size": 7936 00:17:55.192 }, 00:17:55.192 { 00:17:55.192 "name": "BaseBdev2", 00:17:55.192 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:55.192 "is_configured": true, 00:17:55.192 "data_offset": 256, 00:17:55.192 "data_size": 7936 00:17:55.192 } 00:17:55.192 ] 00:17:55.192 }' 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.192 17:59:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.761 [2024-10-25 17:59:14.050153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.761 [2024-10-25 17:59:14.050193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.761 [2024-10-25 17:59:14.050301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.761 [2024-10-25 17:59:14.050389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:55.761 [2024-10-25 17:59:14.050406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.761 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:55.762 17:59:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:55.762 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:56.060 /dev/nbd0 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.060 1+0 records in 00:17:56.060 1+0 records out 00:17:56.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298684 
s, 13.7 MB/s 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.060 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:56.321 /dev/nbd1 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.321 1+0 records in 00:17:56.321 1+0 records out 00:17:56.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432931 s, 9.5 MB/s 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.321 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:56.580 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.581 17:59:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.841 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:57.101 
17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.101 [2024-10-25 17:59:15.462846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:57.101 [2024-10-25 17:59:15.462932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.101 [2024-10-25 17:59:15.462963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:17:57.101 [2024-10-25 17:59:15.462974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.101 [2024-10-25 17:59:15.465272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.101 [2024-10-25 17:59:15.465313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:57.101 [2024-10-25 17:59:15.465390] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:57.101 [2024-10-25 17:59:15.465459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.101 [2024-10-25 17:59:15.465609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.101 spare 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.101 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.362 [2024-10-25 17:59:15.565525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:57.362 [2024-10-25 17:59:15.565589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.362 [2024-10-25 17:59:15.565747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:57.362 [2024-10-25 17:59:15.565968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:57.362 [2024-10-25 17:59:15.565990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:57.362 [2024-10-25 17:59:15.566148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.362 17:59:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.362 "name": "raid_bdev1", 00:17:57.362 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:57.362 "strip_size_kb": 0, 00:17:57.362 "state": "online", 00:17:57.362 "raid_level": "raid1", 00:17:57.362 "superblock": true, 00:17:57.362 "num_base_bdevs": 2, 00:17:57.362 "num_base_bdevs_discovered": 2, 00:17:57.362 "num_base_bdevs_operational": 2, 00:17:57.362 "base_bdevs_list": [ 00:17:57.362 { 00:17:57.362 "name": "spare", 00:17:57.362 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:57.362 "is_configured": true, 00:17:57.362 "data_offset": 256, 00:17:57.362 "data_size": 7936 00:17:57.362 }, 00:17:57.362 { 00:17:57.362 "name": "BaseBdev2", 00:17:57.362 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:57.362 "is_configured": true, 00:17:57.362 "data_offset": 256, 00:17:57.362 "data_size": 7936 00:17:57.362 } 00:17:57.362 ] 00:17:57.362 }' 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.362 17:59:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.622 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.883 "name": "raid_bdev1", 00:17:57.883 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:57.883 "strip_size_kb": 0, 00:17:57.883 "state": "online", 00:17:57.883 "raid_level": "raid1", 00:17:57.883 "superblock": true, 00:17:57.883 "num_base_bdevs": 2, 00:17:57.883 "num_base_bdevs_discovered": 2, 00:17:57.883 "num_base_bdevs_operational": 2, 00:17:57.883 "base_bdevs_list": [ 00:17:57.883 { 00:17:57.883 "name": "spare", 00:17:57.883 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:57.883 "is_configured": true, 00:17:57.883 "data_offset": 256, 00:17:57.883 "data_size": 7936 00:17:57.883 }, 00:17:57.883 { 00:17:57.883 "name": "BaseBdev2", 00:17:57.883 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:57.883 "is_configured": true, 00:17:57.883 "data_offset": 256, 00:17:57.883 "data_size": 7936 00:17:57.883 } 00:17:57.883 ] 00:17:57.883 }' 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.883 [2024-10-25 17:59:16.229816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.883 17:59:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.883 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.883 "name": "raid_bdev1", 00:17:57.883 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:57.883 "strip_size_kb": 0, 00:17:57.883 "state": "online", 00:17:57.883 "raid_level": "raid1", 00:17:57.883 "superblock": true, 00:17:57.883 "num_base_bdevs": 2, 00:17:57.883 "num_base_bdevs_discovered": 1, 00:17:57.883 "num_base_bdevs_operational": 1, 00:17:57.883 "base_bdevs_list": [ 00:17:57.883 { 00:17:57.883 "name": null, 00:17:57.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.883 "is_configured": false, 00:17:57.883 "data_offset": 0, 00:17:57.883 "data_size": 7936 00:17:57.883 }, 00:17:57.883 { 00:17:57.883 "name": "BaseBdev2", 00:17:57.883 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:57.884 "is_configured": true, 00:17:57.884 "data_offset": 256, 00:17:57.884 "data_size": 7936 00:17:57.884 } 
00:17:57.884 ] 00:17:57.884 }' 00:17:57.884 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.884 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.453 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.453 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.453 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.453 [2024-10-25 17:59:16.669094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.453 [2024-10-25 17:59:16.669318] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.453 [2024-10-25 17:59:16.669347] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:58.453 [2024-10-25 17:59:16.669387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.453 [2024-10-25 17:59:16.686683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:58.453 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.453 17:59:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:58.453 [2024-10-25 17:59:16.689115] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.392 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.392 "name": "raid_bdev1", 00:17:59.392 
"uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:59.392 "strip_size_kb": 0, 00:17:59.392 "state": "online", 00:17:59.392 "raid_level": "raid1", 00:17:59.392 "superblock": true, 00:17:59.392 "num_base_bdevs": 2, 00:17:59.392 "num_base_bdevs_discovered": 2, 00:17:59.392 "num_base_bdevs_operational": 2, 00:17:59.392 "process": { 00:17:59.392 "type": "rebuild", 00:17:59.393 "target": "spare", 00:17:59.393 "progress": { 00:17:59.393 "blocks": 2560, 00:17:59.393 "percent": 32 00:17:59.393 } 00:17:59.393 }, 00:17:59.393 "base_bdevs_list": [ 00:17:59.393 { 00:17:59.393 "name": "spare", 00:17:59.393 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:17:59.393 "is_configured": true, 00:17:59.393 "data_offset": 256, 00:17:59.393 "data_size": 7936 00:17:59.393 }, 00:17:59.393 { 00:17:59.393 "name": "BaseBdev2", 00:17:59.393 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:59.393 "is_configured": true, 00:17:59.393 "data_offset": 256, 00:17:59.393 "data_size": 7936 00:17:59.393 } 00:17:59.393 ] 00:17:59.393 }' 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.393 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.393 [2024-10-25 17:59:17.821070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.653 
[2024-10-25 17:59:17.895486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:59.653 [2024-10-25 17:59:17.895598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.653 [2024-10-25 17:59:17.895618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.653 [2024-10-25 17:59:17.895655] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.653 17:59:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.653 "name": "raid_bdev1", 00:17:59.653 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:17:59.653 "strip_size_kb": 0, 00:17:59.653 "state": "online", 00:17:59.653 "raid_level": "raid1", 00:17:59.653 "superblock": true, 00:17:59.653 "num_base_bdevs": 2, 00:17:59.653 "num_base_bdevs_discovered": 1, 00:17:59.653 "num_base_bdevs_operational": 1, 00:17:59.653 "base_bdevs_list": [ 00:17:59.653 { 00:17:59.653 "name": null, 00:17:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.653 "is_configured": false, 00:17:59.653 "data_offset": 0, 00:17:59.653 "data_size": 7936 00:17:59.653 }, 00:17:59.653 { 00:17:59.653 "name": "BaseBdev2", 00:17:59.653 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:17:59.653 "is_configured": true, 00:17:59.653 "data_offset": 256, 00:17:59.653 "data_size": 7936 00:17:59.653 } 00:17:59.653 ] 00:17:59.653 }' 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.653 17:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.221 17:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:00.221 17:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.221 17:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.221 [2024-10-25 17:59:18.451061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.221 [2024-10-25 17:59:18.451168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.221 [2024-10-25 17:59:18.451212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:00.221 [2024-10-25 17:59:18.451237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.221 [2024-10-25 17:59:18.451624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.221 [2024-10-25 17:59:18.451676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.221 [2024-10-25 17:59:18.451778] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:00.222 [2024-10-25 17:59:18.451815] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.222 [2024-10-25 17:59:18.451852] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:00.222 [2024-10-25 17:59:18.451898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.222 [2024-10-25 17:59:18.468966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:00.222 spare 00:18:00.222 17:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.222 17:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:00.222 [2024-10-25 17:59:18.471730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.160 "name": 
"raid_bdev1", 00:18:01.160 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:01.160 "strip_size_kb": 0, 00:18:01.160 "state": "online", 00:18:01.160 "raid_level": "raid1", 00:18:01.160 "superblock": true, 00:18:01.160 "num_base_bdevs": 2, 00:18:01.160 "num_base_bdevs_discovered": 2, 00:18:01.160 "num_base_bdevs_operational": 2, 00:18:01.160 "process": { 00:18:01.160 "type": "rebuild", 00:18:01.160 "target": "spare", 00:18:01.160 "progress": { 00:18:01.160 "blocks": 2560, 00:18:01.160 "percent": 32 00:18:01.160 } 00:18:01.160 }, 00:18:01.160 "base_bdevs_list": [ 00:18:01.160 { 00:18:01.160 "name": "spare", 00:18:01.160 "uuid": "3967d990-139c-549b-87ed-e5ffde7cb238", 00:18:01.160 "is_configured": true, 00:18:01.160 "data_offset": 256, 00:18:01.160 "data_size": 7936 00:18:01.160 }, 00:18:01.160 { 00:18:01.160 "name": "BaseBdev2", 00:18:01.160 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:01.160 "is_configured": true, 00:18:01.160 "data_offset": 256, 00:18:01.160 "data_size": 7936 00:18:01.160 } 00:18:01.160 ] 00:18:01.160 }' 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.160 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.420 [2024-10-25 17:59:19.635040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:01.420 [2024-10-25 17:59:19.678193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:01.420 [2024-10-25 17:59:19.678267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.420 [2024-10-25 17:59:19.678288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.420 [2024-10-25 17:59:19.678297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.420 "name": "raid_bdev1", 00:18:01.420 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:01.420 "strip_size_kb": 0, 00:18:01.420 "state": "online", 00:18:01.420 "raid_level": "raid1", 00:18:01.420 "superblock": true, 00:18:01.420 "num_base_bdevs": 2, 00:18:01.420 "num_base_bdevs_discovered": 1, 00:18:01.420 "num_base_bdevs_operational": 1, 00:18:01.420 "base_bdevs_list": [ 00:18:01.420 { 00:18:01.420 "name": null, 00:18:01.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.420 "is_configured": false, 00:18:01.420 "data_offset": 0, 00:18:01.420 "data_size": 7936 00:18:01.420 }, 00:18:01.420 { 00:18:01.420 "name": "BaseBdev2", 00:18:01.420 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:01.420 "is_configured": true, 00:18:01.420 "data_offset": 256, 00:18:01.420 "data_size": 7936 00:18:01.420 } 00:18:01.420 ] 00:18:01.420 }' 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.420 17:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.991 17:59:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.991 "name": "raid_bdev1", 00:18:01.991 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:01.991 "strip_size_kb": 0, 00:18:01.991 "state": "online", 00:18:01.991 "raid_level": "raid1", 00:18:01.991 "superblock": true, 00:18:01.991 "num_base_bdevs": 2, 00:18:01.991 "num_base_bdevs_discovered": 1, 00:18:01.991 "num_base_bdevs_operational": 1, 00:18:01.991 "base_bdevs_list": [ 00:18:01.991 { 00:18:01.991 "name": null, 00:18:01.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.991 "is_configured": false, 00:18:01.991 "data_offset": 0, 00:18:01.991 "data_size": 7936 00:18:01.991 }, 00:18:01.991 { 00:18:01.991 "name": "BaseBdev2", 00:18:01.991 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:01.991 "is_configured": true, 00:18:01.991 "data_offset": 256, 00:18:01.991 "data_size": 7936 00:18:01.991 } 00:18:01.991 ] 00:18:01.991 }' 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.991 [2024-10-25 17:59:20.325279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:01.991 [2024-10-25 17:59:20.325362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.991 [2024-10-25 17:59:20.325392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:01.991 [2024-10-25 17:59:20.325403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.991 [2024-10-25 17:59:20.325662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.991 [2024-10-25 17:59:20.325686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:01.991 [2024-10-25 17:59:20.325750] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:01.991 [2024-10-25 17:59:20.325768] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.991 [2024-10-25 17:59:20.325780] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:01.991 [2024-10-25 17:59:20.325792] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:01.991 BaseBdev1 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.991 17:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.930 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.195 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.195 "name": "raid_bdev1", 00:18:03.195 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:03.195 "strip_size_kb": 0, 00:18:03.195 "state": "online", 00:18:03.195 "raid_level": "raid1", 00:18:03.195 "superblock": true, 00:18:03.195 "num_base_bdevs": 2, 00:18:03.195 "num_base_bdevs_discovered": 1, 00:18:03.195 "num_base_bdevs_operational": 1, 00:18:03.195 "base_bdevs_list": [ 00:18:03.195 { 00:18:03.195 "name": null, 00:18:03.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.195 "is_configured": false, 00:18:03.195 "data_offset": 0, 00:18:03.195 "data_size": 7936 00:18:03.195 }, 00:18:03.195 { 00:18:03.195 "name": "BaseBdev2", 00:18:03.195 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:03.195 "is_configured": true, 00:18:03.195 "data_offset": 256, 00:18:03.195 "data_size": 7936 00:18:03.195 } 00:18:03.195 ] 00:18:03.195 }' 00:18:03.195 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.195 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.463 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.723 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.723 "name": "raid_bdev1", 00:18:03.723 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:03.723 "strip_size_kb": 0, 00:18:03.723 "state": "online", 00:18:03.723 "raid_level": "raid1", 00:18:03.723 "superblock": true, 00:18:03.723 "num_base_bdevs": 2, 00:18:03.723 "num_base_bdevs_discovered": 1, 00:18:03.723 "num_base_bdevs_operational": 1, 00:18:03.723 "base_bdevs_list": [ 00:18:03.723 { 00:18:03.723 "name": null, 00:18:03.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.723 "is_configured": false, 00:18:03.723 "data_offset": 0, 00:18:03.723 "data_size": 7936 00:18:03.723 }, 00:18:03.723 { 00:18:03.723 "name": "BaseBdev2", 00:18:03.723 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:03.723 "is_configured": 
true, 00:18:03.723 "data_offset": 256, 00:18:03.723 "data_size": 7936 00:18:03.723 } 00:18:03.723 ] 00:18:03.723 }' 00:18:03.723 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.723 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.723 17:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.723 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.723 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.723 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.724 [2024-10-25 17:59:22.027192] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.724 [2024-10-25 17:59:22.027400] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.724 [2024-10-25 17:59:22.027418] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:03.724 request: 00:18:03.724 { 00:18:03.724 "base_bdev": "BaseBdev1", 00:18:03.724 "raid_bdev": "raid_bdev1", 00:18:03.724 "method": "bdev_raid_add_base_bdev", 00:18:03.724 "req_id": 1 00:18:03.724 } 00:18:03.724 Got JSON-RPC error response 00:18:03.724 response: 00:18:03.724 { 00:18:03.724 "code": -22, 00:18:03.724 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:03.724 } 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.724 17:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.664 "name": "raid_bdev1", 00:18:04.664 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:04.664 "strip_size_kb": 0, 00:18:04.664 "state": "online", 00:18:04.664 "raid_level": "raid1", 00:18:04.664 "superblock": true, 00:18:04.664 "num_base_bdevs": 2, 00:18:04.664 "num_base_bdevs_discovered": 1, 00:18:04.664 "num_base_bdevs_operational": 1, 00:18:04.664 "base_bdevs_list": [ 00:18:04.664 { 00:18:04.664 "name": null, 00:18:04.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.664 "is_configured": false, 00:18:04.664 
"data_offset": 0, 00:18:04.664 "data_size": 7936 00:18:04.664 }, 00:18:04.664 { 00:18:04.664 "name": "BaseBdev2", 00:18:04.664 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:04.664 "is_configured": true, 00:18:04.664 "data_offset": 256, 00:18:04.664 "data_size": 7936 00:18:04.664 } 00:18:04.664 ] 00:18:04.664 }' 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.664 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.232 "name": "raid_bdev1", 00:18:05.232 "uuid": "2dbf4d13-27d0-4229-8d3e-f02717c9ba27", 00:18:05.232 
"strip_size_kb": 0, 00:18:05.232 "state": "online", 00:18:05.232 "raid_level": "raid1", 00:18:05.232 "superblock": true, 00:18:05.232 "num_base_bdevs": 2, 00:18:05.232 "num_base_bdevs_discovered": 1, 00:18:05.232 "num_base_bdevs_operational": 1, 00:18:05.232 "base_bdevs_list": [ 00:18:05.232 { 00:18:05.232 "name": null, 00:18:05.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.232 "is_configured": false, 00:18:05.232 "data_offset": 0, 00:18:05.232 "data_size": 7936 00:18:05.232 }, 00:18:05.232 { 00:18:05.232 "name": "BaseBdev2", 00:18:05.232 "uuid": "8bd04471-6255-5266-a75b-3e4ae014f666", 00:18:05.232 "is_configured": true, 00:18:05.232 "data_offset": 256, 00:18:05.232 "data_size": 7936 00:18:05.232 } 00:18:05.232 ] 00:18:05.232 }' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87688 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87688 ']' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87688 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.232 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87688 00:18:05.492 17:59:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.492 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.492 killing process with pid 87688 00:18:05.492 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87688' 00:18:05.492 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87688 00:18:05.492 Received shutdown signal, test time was about 60.000000 seconds 00:18:05.492 00:18:05.492 Latency(us) 00:18:05.492 [2024-10-25T17:59:23.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.492 [2024-10-25T17:59:23.928Z] =================================================================================================================== 00:18:05.492 [2024-10-25T17:59:23.928Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.492 [2024-10-25 17:59:23.672135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.492 17:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87688 00:18:05.492 [2024-10-25 17:59:23.672286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.492 [2024-10-25 17:59:23.672346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.492 [2024-10-25 17:59:23.672369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:05.751 [2024-10-25 17:59:24.066717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.127 17:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:07.127 00:18:07.127 real 0m20.903s 00:18:07.127 user 0m27.310s 00:18:07.127 sys 0m2.809s 00:18:07.127 17:59:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.127 17:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.127 ************************************ 00:18:07.127 END TEST raid_rebuild_test_sb_md_separate 00:18:07.127 ************************************ 00:18:07.127 17:59:25 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:07.127 17:59:25 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:07.127 17:59:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:07.127 17:59:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.127 17:59:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.127 ************************************ 00:18:07.127 START TEST raid_state_function_test_sb_md_interleaved 00:18:07.127 ************************************ 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.127 17:59:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88384 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88384' 00:18:07.127 Process raid pid: 88384 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88384 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88384 ']' 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.127 17:59:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.127 [2024-10-25 17:59:25.522529] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:18:07.127 [2024-10-25 17:59:25.522724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.385 [2024-10-25 17:59:25.702395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.642 [2024-10-25 17:59:25.858798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.900 [2024-10-25 17:59:26.083189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.900 [2024-10-25 17:59:26.083250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 [2024-10-25 17:59:26.492177] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.157 [2024-10-25 17:59:26.492245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.157 [2024-10-25 17:59:26.492260] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.157 [2024-10-25 17:59:26.492274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.157 17:59:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.157 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.158 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.158 17:59:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.158 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.158 "name": "Existed_Raid", 00:18:08.158 "uuid": "0b41dc3c-5752-4258-acb0-d254edcab4ed", 00:18:08.158 "strip_size_kb": 0, 00:18:08.158 "state": "configuring", 00:18:08.158 "raid_level": "raid1", 00:18:08.158 "superblock": true, 00:18:08.158 "num_base_bdevs": 2, 00:18:08.158 "num_base_bdevs_discovered": 0, 00:18:08.158 "num_base_bdevs_operational": 2, 00:18:08.158 "base_bdevs_list": [ 00:18:08.158 { 00:18:08.158 "name": "BaseBdev1", 00:18:08.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.158 "is_configured": false, 00:18:08.158 "data_offset": 0, 00:18:08.158 "data_size": 0 00:18:08.158 }, 00:18:08.158 { 00:18:08.158 "name": "BaseBdev2", 00:18:08.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.158 "is_configured": false, 00:18:08.158 "data_offset": 0, 00:18:08.158 "data_size": 0 00:18:08.158 } 00:18:08.158 ] 00:18:08.158 }' 00:18:08.158 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.158 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 [2024-10-25 17:59:26.948047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.721 [2024-10-25 17:59:26.948099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 [2024-10-25 17:59:26.960065] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.721 [2024-10-25 17:59:26.960124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.721 [2024-10-25 17:59:26.960136] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.721 [2024-10-25 17:59:26.960152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 [2024-10-25 17:59:27.006370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.721 BaseBdev1 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 [ 00:18:08.721 { 00:18:08.721 "name": "BaseBdev1", 00:18:08.721 "aliases": [ 00:18:08.721 "41236e2d-8ea6-4d64-a99b-3fc57201dfc4" 00:18:08.721 ], 00:18:08.721 "product_name": "Malloc disk", 00:18:08.721 "block_size": 4128, 00:18:08.721 "num_blocks": 8192, 00:18:08.721 "uuid": "41236e2d-8ea6-4d64-a99b-3fc57201dfc4", 00:18:08.721 "md_size": 32, 00:18:08.721 
"md_interleave": true, 00:18:08.721 "dif_type": 0, 00:18:08.721 "assigned_rate_limits": { 00:18:08.721 "rw_ios_per_sec": 0, 00:18:08.721 "rw_mbytes_per_sec": 0, 00:18:08.721 "r_mbytes_per_sec": 0, 00:18:08.721 "w_mbytes_per_sec": 0 00:18:08.721 }, 00:18:08.721 "claimed": true, 00:18:08.721 "claim_type": "exclusive_write", 00:18:08.721 "zoned": false, 00:18:08.721 "supported_io_types": { 00:18:08.721 "read": true, 00:18:08.721 "write": true, 00:18:08.721 "unmap": true, 00:18:08.721 "flush": true, 00:18:08.721 "reset": true, 00:18:08.721 "nvme_admin": false, 00:18:08.721 "nvme_io": false, 00:18:08.721 "nvme_io_md": false, 00:18:08.721 "write_zeroes": true, 00:18:08.721 "zcopy": true, 00:18:08.721 "get_zone_info": false, 00:18:08.721 "zone_management": false, 00:18:08.721 "zone_append": false, 00:18:08.721 "compare": false, 00:18:08.721 "compare_and_write": false, 00:18:08.721 "abort": true, 00:18:08.721 "seek_hole": false, 00:18:08.721 "seek_data": false, 00:18:08.721 "copy": true, 00:18:08.721 "nvme_iov_md": false 00:18:08.721 }, 00:18:08.721 "memory_domains": [ 00:18:08.721 { 00:18:08.721 "dma_device_id": "system", 00:18:08.721 "dma_device_type": 1 00:18:08.721 }, 00:18:08.721 { 00:18:08.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.721 "dma_device_type": 2 00:18:08.721 } 00:18:08.721 ], 00:18:08.721 "driver_specific": {} 00:18:08.721 } 00:18:08.721 ] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.721 17:59:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.721 "name": "Existed_Raid", 00:18:08.721 "uuid": "5464eaf4-fb1f-4ef7-aa66-4237e549cb52", 00:18:08.721 "strip_size_kb": 0, 00:18:08.721 "state": "configuring", 00:18:08.721 "raid_level": "raid1", 
00:18:08.721 "superblock": true, 00:18:08.721 "num_base_bdevs": 2, 00:18:08.721 "num_base_bdevs_discovered": 1, 00:18:08.721 "num_base_bdevs_operational": 2, 00:18:08.721 "base_bdevs_list": [ 00:18:08.721 { 00:18:08.721 "name": "BaseBdev1", 00:18:08.721 "uuid": "41236e2d-8ea6-4d64-a99b-3fc57201dfc4", 00:18:08.721 "is_configured": true, 00:18:08.721 "data_offset": 256, 00:18:08.721 "data_size": 7936 00:18:08.721 }, 00:18:08.721 { 00:18:08.721 "name": "BaseBdev2", 00:18:08.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.721 "is_configured": false, 00:18:08.721 "data_offset": 0, 00:18:08.721 "data_size": 0 00:18:08.721 } 00:18:08.721 ] 00:18:08.721 }' 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.721 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 [2024-10-25 17:59:27.422007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.283 [2024-10-25 17:59:27.422087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 [2024-10-25 17:59:27.434101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.283 [2024-10-25 17:59:27.436517] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.283 [2024-10-25 17:59:27.436576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.283 
17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.283 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.283 "name": "Existed_Raid", 00:18:09.283 "uuid": "29e2147b-604d-4285-b835-9fa4be2b347c", 00:18:09.283 "strip_size_kb": 0, 00:18:09.283 "state": "configuring", 00:18:09.283 "raid_level": "raid1", 00:18:09.283 "superblock": true, 00:18:09.283 "num_base_bdevs": 2, 00:18:09.284 "num_base_bdevs_discovered": 1, 00:18:09.284 "num_base_bdevs_operational": 2, 00:18:09.284 "base_bdevs_list": [ 00:18:09.284 { 00:18:09.284 "name": "BaseBdev1", 00:18:09.284 "uuid": "41236e2d-8ea6-4d64-a99b-3fc57201dfc4", 00:18:09.284 "is_configured": true, 00:18:09.284 "data_offset": 256, 00:18:09.284 "data_size": 7936 00:18:09.284 }, 00:18:09.284 { 00:18:09.284 "name": "BaseBdev2", 00:18:09.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.284 "is_configured": false, 00:18:09.284 "data_offset": 0, 00:18:09.284 "data_size": 0 00:18:09.284 } 00:18:09.284 ] 00:18:09.284 }' 00:18:09.284 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:09.284 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.598 [2024-10-25 17:59:27.864516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.598 [2024-10-25 17:59:27.864849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:09.598 [2024-10-25 17:59:27.864875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:09.598 [2024-10-25 17:59:27.864994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:09.598 [2024-10-25 17:59:27.865092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:09.598 [2024-10-25 17:59:27.865110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:09.598 [2024-10-25 17:59:27.865195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.598 BaseBdev2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.598 [ 00:18:09.598 { 00:18:09.598 "name": "BaseBdev2", 00:18:09.598 "aliases": [ 00:18:09.598 "2cfd7141-2a6f-479f-99a3-c3c9eec94a67" 00:18:09.598 ], 00:18:09.598 "product_name": "Malloc disk", 00:18:09.598 "block_size": 4128, 00:18:09.598 "num_blocks": 8192, 00:18:09.598 "uuid": "2cfd7141-2a6f-479f-99a3-c3c9eec94a67", 00:18:09.598 "md_size": 32, 00:18:09.598 "md_interleave": true, 00:18:09.598 "dif_type": 0, 00:18:09.598 "assigned_rate_limits": { 00:18:09.598 "rw_ios_per_sec": 0, 00:18:09.598 "rw_mbytes_per_sec": 0, 00:18:09.598 "r_mbytes_per_sec": 0, 00:18:09.598 "w_mbytes_per_sec": 0 00:18:09.598 }, 00:18:09.598 "claimed": true, 00:18:09.598 "claim_type": "exclusive_write", 
00:18:09.598 "zoned": false, 00:18:09.598 "supported_io_types": { 00:18:09.598 "read": true, 00:18:09.598 "write": true, 00:18:09.598 "unmap": true, 00:18:09.598 "flush": true, 00:18:09.598 "reset": true, 00:18:09.598 "nvme_admin": false, 00:18:09.598 "nvme_io": false, 00:18:09.598 "nvme_io_md": false, 00:18:09.598 "write_zeroes": true, 00:18:09.598 "zcopy": true, 00:18:09.598 "get_zone_info": false, 00:18:09.598 "zone_management": false, 00:18:09.598 "zone_append": false, 00:18:09.598 "compare": false, 00:18:09.598 "compare_and_write": false, 00:18:09.598 "abort": true, 00:18:09.598 "seek_hole": false, 00:18:09.598 "seek_data": false, 00:18:09.598 "copy": true, 00:18:09.598 "nvme_iov_md": false 00:18:09.598 }, 00:18:09.598 "memory_domains": [ 00:18:09.598 { 00:18:09.598 "dma_device_id": "system", 00:18:09.598 "dma_device_type": 1 00:18:09.598 }, 00:18:09.598 { 00:18:09.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.598 "dma_device_type": 2 00:18:09.598 } 00:18:09.598 ], 00:18:09.598 "driver_specific": {} 00:18:09.598 } 00:18:09.598 ] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.598 
17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.598 "name": "Existed_Raid", 00:18:09.598 "uuid": "29e2147b-604d-4285-b835-9fa4be2b347c", 00:18:09.598 "strip_size_kb": 0, 00:18:09.598 "state": "online", 00:18:09.598 "raid_level": "raid1", 00:18:09.598 "superblock": true, 00:18:09.598 "num_base_bdevs": 2, 00:18:09.598 "num_base_bdevs_discovered": 2, 00:18:09.598 
"num_base_bdevs_operational": 2, 00:18:09.598 "base_bdevs_list": [ 00:18:09.598 { 00:18:09.598 "name": "BaseBdev1", 00:18:09.598 "uuid": "41236e2d-8ea6-4d64-a99b-3fc57201dfc4", 00:18:09.598 "is_configured": true, 00:18:09.598 "data_offset": 256, 00:18:09.598 "data_size": 7936 00:18:09.598 }, 00:18:09.598 { 00:18:09.598 "name": "BaseBdev2", 00:18:09.598 "uuid": "2cfd7141-2a6f-479f-99a3-c3c9eec94a67", 00:18:09.598 "is_configured": true, 00:18:09.598 "data_offset": 256, 00:18:09.598 "data_size": 7936 00:18:09.598 } 00:18:09.598 ] 00:18:09.598 }' 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.598 17:59:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.161 17:59:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.161 [2024-10-25 17:59:28.312205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.161 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.161 "name": "Existed_Raid", 00:18:10.161 "aliases": [ 00:18:10.161 "29e2147b-604d-4285-b835-9fa4be2b347c" 00:18:10.161 ], 00:18:10.161 "product_name": "Raid Volume", 00:18:10.161 "block_size": 4128, 00:18:10.161 "num_blocks": 7936, 00:18:10.161 "uuid": "29e2147b-604d-4285-b835-9fa4be2b347c", 00:18:10.161 "md_size": 32, 00:18:10.161 "md_interleave": true, 00:18:10.161 "dif_type": 0, 00:18:10.161 "assigned_rate_limits": { 00:18:10.161 "rw_ios_per_sec": 0, 00:18:10.161 "rw_mbytes_per_sec": 0, 00:18:10.161 "r_mbytes_per_sec": 0, 00:18:10.161 "w_mbytes_per_sec": 0 00:18:10.161 }, 00:18:10.161 "claimed": false, 00:18:10.161 "zoned": false, 00:18:10.161 "supported_io_types": { 00:18:10.161 "read": true, 00:18:10.161 "write": true, 00:18:10.161 "unmap": false, 00:18:10.161 "flush": false, 00:18:10.161 "reset": true, 00:18:10.161 "nvme_admin": false, 00:18:10.161 "nvme_io": false, 00:18:10.161 "nvme_io_md": false, 00:18:10.161 "write_zeroes": true, 00:18:10.161 "zcopy": false, 00:18:10.161 "get_zone_info": false, 00:18:10.161 "zone_management": false, 00:18:10.161 "zone_append": false, 00:18:10.161 "compare": false, 00:18:10.161 "compare_and_write": false, 00:18:10.161 "abort": false, 00:18:10.161 "seek_hole": false, 00:18:10.161 "seek_data": false, 00:18:10.161 "copy": false, 00:18:10.161 "nvme_iov_md": false 00:18:10.161 }, 00:18:10.161 "memory_domains": [ 00:18:10.161 { 00:18:10.161 "dma_device_id": "system", 00:18:10.161 "dma_device_type": 1 00:18:10.161 }, 00:18:10.161 { 00:18:10.161 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:10.161 "dma_device_type": 2 00:18:10.161 }, 00:18:10.161 { 00:18:10.161 "dma_device_id": "system", 00:18:10.161 "dma_device_type": 1 00:18:10.161 }, 00:18:10.161 { 00:18:10.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.161 "dma_device_type": 2 00:18:10.161 } 00:18:10.161 ], 00:18:10.161 "driver_specific": { 00:18:10.161 "raid": { 00:18:10.161 "uuid": "29e2147b-604d-4285-b835-9fa4be2b347c", 00:18:10.161 "strip_size_kb": 0, 00:18:10.162 "state": "online", 00:18:10.162 "raid_level": "raid1", 00:18:10.162 "superblock": true, 00:18:10.162 "num_base_bdevs": 2, 00:18:10.162 "num_base_bdevs_discovered": 2, 00:18:10.162 "num_base_bdevs_operational": 2, 00:18:10.162 "base_bdevs_list": [ 00:18:10.162 { 00:18:10.162 "name": "BaseBdev1", 00:18:10.162 "uuid": "41236e2d-8ea6-4d64-a99b-3fc57201dfc4", 00:18:10.162 "is_configured": true, 00:18:10.162 "data_offset": 256, 00:18:10.162 "data_size": 7936 00:18:10.162 }, 00:18:10.162 { 00:18:10.162 "name": "BaseBdev2", 00:18:10.162 "uuid": "2cfd7141-2a6f-479f-99a3-c3c9eec94a67", 00:18:10.162 "is_configured": true, 00:18:10.162 "data_offset": 256, 00:18:10.162 "data_size": 7936 00:18:10.162 } 00:18:10.162 ] 00:18:10.162 } 00:18:10.162 } 00:18:10.162 }' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:10.162 BaseBdev2' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:10.162 
17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.162 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.162 [2024-10-25 17:59:28.547508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.419 17:59:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.419 "name": "Existed_Raid", 00:18:10.419 "uuid": "29e2147b-604d-4285-b835-9fa4be2b347c", 00:18:10.419 "strip_size_kb": 0, 00:18:10.419 "state": "online", 00:18:10.419 "raid_level": "raid1", 00:18:10.419 "superblock": true, 00:18:10.419 "num_base_bdevs": 2, 00:18:10.419 "num_base_bdevs_discovered": 1, 00:18:10.419 "num_base_bdevs_operational": 1, 00:18:10.419 "base_bdevs_list": [ 00:18:10.419 { 00:18:10.419 "name": null, 00:18:10.419 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:10.419 "is_configured": false, 00:18:10.419 "data_offset": 0, 00:18:10.419 "data_size": 7936 00:18:10.419 }, 00:18:10.419 { 00:18:10.419 "name": "BaseBdev2", 00:18:10.419 "uuid": "2cfd7141-2a6f-479f-99a3-c3c9eec94a67", 00:18:10.419 "is_configured": true, 00:18:10.419 "data_offset": 256, 00:18:10.419 "data_size": 7936 00:18:10.419 } 00:18:10.419 ] 00:18:10.419 }' 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.419 17:59:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:10.987 17:59:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.987 [2024-10-25 17:59:29.217325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.987 [2024-10-25 17:59:29.217459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.987 [2024-10-25 17:59:29.335425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.987 [2024-10-25 17:59:29.335493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.987 [2024-10-25 17:59:29.335508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88384 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88384 ']' 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88384 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:10.987 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.988 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88384 00:18:11.246 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.246 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.246 killing process with pid 88384 00:18:11.246 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88384' 00:18:11.246 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88384 00:18:11.246 [2024-10-25 17:59:29.431282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.246 17:59:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88384 00:18:11.246 [2024-10-25 17:59:29.452153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.730 
17:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:12.730 00:18:12.730 real 0m5.390s 00:18:12.730 user 0m7.643s 00:18:12.730 sys 0m0.834s 00:18:12.730 17:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.730 17:59:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.730 ************************************ 00:18:12.730 END TEST raid_state_function_test_sb_md_interleaved 00:18:12.730 ************************************ 00:18:12.730 17:59:30 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:12.730 17:59:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:12.730 17:59:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.730 17:59:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.730 ************************************ 00:18:12.730 START TEST raid_superblock_test_md_interleaved 00:18:12.730 ************************************ 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88636 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88636 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88636 ']' 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.730 17:59:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.730 [2024-10-25 17:59:30.973305] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:12.730 [2024-10-25 17:59:30.973496] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88636 ] 00:18:12.730 [2024-10-25 17:59:31.153131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.991 [2024-10-25 17:59:31.309152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.256 [2024-10-25 17:59:31.550994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.256 [2024-10-25 17:59:31.551064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.516 malloc1 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.516 [2024-10-25 17:59:31.942503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.516 [2024-10-25 17:59:31.942571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.516 [2024-10-25 17:59:31.942596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.516 [2024-10-25 17:59:31.942609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.516 
[2024-10-25 17:59:31.944773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.516 [2024-10-25 17:59:31.944817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.516 pt1 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.516 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 malloc2 00:18:13.776 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.776 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.776 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.776 17:59:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 [2024-10-25 17:59:32.004008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.776 [2024-10-25 17:59:32.004073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.776 [2024-10-25 17:59:32.004098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:13.776 [2024-10-25 17:59:32.004110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.776 [2024-10-25 17:59:32.006241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.776 [2024-10-25 17:59:32.006282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.776 pt2 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.776 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 [2024-10-25 17:59:32.016029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.776 [2024-10-25 17:59:32.018120] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.776 [2024-10-25 17:59:32.018334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:13.776 [2024-10-25 17:59:32.018358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:13.777 [2024-10-25 17:59:32.018442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.777 [2024-10-25 17:59:32.018529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:13.777 [2024-10-25 17:59:32.018545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:13.777 [2024-10-25 17:59:32.018626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.777 
17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.777 "name": "raid_bdev1", 00:18:13.777 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:13.777 "strip_size_kb": 0, 00:18:13.777 "state": "online", 00:18:13.777 "raid_level": "raid1", 00:18:13.777 "superblock": true, 00:18:13.777 "num_base_bdevs": 2, 00:18:13.777 "num_base_bdevs_discovered": 2, 00:18:13.777 "num_base_bdevs_operational": 2, 00:18:13.777 "base_bdevs_list": [ 00:18:13.777 { 00:18:13.777 "name": "pt1", 00:18:13.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.777 "is_configured": true, 00:18:13.777 "data_offset": 256, 00:18:13.777 "data_size": 7936 00:18:13.777 }, 00:18:13.777 { 00:18:13.777 "name": "pt2", 00:18:13.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.777 "is_configured": true, 00:18:13.777 "data_offset": 256, 00:18:13.777 "data_size": 7936 00:18:13.777 } 00:18:13.777 ] 00:18:13.777 }' 00:18:13.777 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.777 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.348 [2024-10-25 17:59:32.507642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.348 "name": "raid_bdev1", 00:18:14.348 "aliases": [ 00:18:14.348 "a542acf6-563a-4d2d-b052-e7559c336138" 00:18:14.348 ], 00:18:14.348 "product_name": "Raid Volume", 00:18:14.348 "block_size": 4128, 00:18:14.348 "num_blocks": 7936, 00:18:14.348 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:14.348 "md_size": 32, 
00:18:14.348 "md_interleave": true, 00:18:14.348 "dif_type": 0, 00:18:14.348 "assigned_rate_limits": { 00:18:14.348 "rw_ios_per_sec": 0, 00:18:14.348 "rw_mbytes_per_sec": 0, 00:18:14.348 "r_mbytes_per_sec": 0, 00:18:14.348 "w_mbytes_per_sec": 0 00:18:14.348 }, 00:18:14.348 "claimed": false, 00:18:14.348 "zoned": false, 00:18:14.348 "supported_io_types": { 00:18:14.348 "read": true, 00:18:14.348 "write": true, 00:18:14.348 "unmap": false, 00:18:14.348 "flush": false, 00:18:14.348 "reset": true, 00:18:14.348 "nvme_admin": false, 00:18:14.348 "nvme_io": false, 00:18:14.348 "nvme_io_md": false, 00:18:14.348 "write_zeroes": true, 00:18:14.348 "zcopy": false, 00:18:14.348 "get_zone_info": false, 00:18:14.348 "zone_management": false, 00:18:14.348 "zone_append": false, 00:18:14.348 "compare": false, 00:18:14.348 "compare_and_write": false, 00:18:14.348 "abort": false, 00:18:14.348 "seek_hole": false, 00:18:14.348 "seek_data": false, 00:18:14.348 "copy": false, 00:18:14.348 "nvme_iov_md": false 00:18:14.348 }, 00:18:14.348 "memory_domains": [ 00:18:14.348 { 00:18:14.348 "dma_device_id": "system", 00:18:14.348 "dma_device_type": 1 00:18:14.348 }, 00:18:14.348 { 00:18:14.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.348 "dma_device_type": 2 00:18:14.348 }, 00:18:14.348 { 00:18:14.348 "dma_device_id": "system", 00:18:14.348 "dma_device_type": 1 00:18:14.348 }, 00:18:14.348 { 00:18:14.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.348 "dma_device_type": 2 00:18:14.348 } 00:18:14.348 ], 00:18:14.348 "driver_specific": { 00:18:14.348 "raid": { 00:18:14.348 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:14.348 "strip_size_kb": 0, 00:18:14.348 "state": "online", 00:18:14.348 "raid_level": "raid1", 00:18:14.348 "superblock": true, 00:18:14.348 "num_base_bdevs": 2, 00:18:14.348 "num_base_bdevs_discovered": 2, 00:18:14.348 "num_base_bdevs_operational": 2, 00:18:14.348 "base_bdevs_list": [ 00:18:14.348 { 00:18:14.348 "name": "pt1", 00:18:14.348 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:14.348 "is_configured": true, 00:18:14.348 "data_offset": 256, 00:18:14.348 "data_size": 7936 00:18:14.348 }, 00:18:14.348 { 00:18:14.348 "name": "pt2", 00:18:14.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.348 "is_configured": true, 00:18:14.348 "data_offset": 256, 00:18:14.348 "data_size": 7936 00:18:14.348 } 00:18:14.348 ] 00:18:14.348 } 00:18:14.348 } 00:18:14.348 }' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:14.348 pt2' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:14.348 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:14.348 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.349 [2024-10-25 17:59:32.727312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a542acf6-563a-4d2d-b052-e7559c336138 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a542acf6-563a-4d2d-b052-e7559c336138 ']' 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.349 [2024-10-25 17:59:32.774949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.349 [2024-10-25 17:59:32.774984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.349 [2024-10-25 17:59:32.775094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.349 [2024-10-25 17:59:32.775172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.349 [2024-10-25 17:59:32.775188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.349 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.610 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:14.610 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 [2024-10-25 17:59:32.922744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:14.610 [2024-10-25 17:59:32.924914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:14.610 [2024-10-25 17:59:32.925018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:14.610 [2024-10-25 17:59:32.925084] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:14.610 [2024-10-25 17:59:32.925102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.610 [2024-10-25 17:59:32.925114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:14.610 request: 00:18:14.610 { 00:18:14.610 "name": "raid_bdev1", 00:18:14.610 "raid_level": "raid1", 00:18:14.610 "base_bdevs": [ 00:18:14.610 "malloc1", 00:18:14.610 "malloc2" 00:18:14.610 ], 00:18:14.610 "superblock": false, 00:18:14.610 "method": "bdev_raid_create", 00:18:14.610 "req_id": 1 00:18:14.610 } 00:18:14.610 Got JSON-RPC error response 00:18:14.610 response: 00:18:14.610 { 00:18:14.610 "code": -17, 00:18:14.610 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:14.610 } 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.610 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.610 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.611 [2024-10-25 17:59:32.990595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.611 [2024-10-25 17:59:32.990664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.611 [2024-10-25 17:59:32.990684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:14.611 [2024-10-25 17:59:32.990697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.611 [2024-10-25 17:59:32.992896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.611 [2024-10-25 17:59:32.992942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.611 [2024-10-25 17:59:32.993007] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:14.611 [2024-10-25 17:59:32.993080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.611 pt1 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.611 17:59:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.611 17:59:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.611 
"name": "raid_bdev1", 00:18:14.611 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:14.611 "strip_size_kb": 0, 00:18:14.611 "state": "configuring", 00:18:14.611 "raid_level": "raid1", 00:18:14.611 "superblock": true, 00:18:14.611 "num_base_bdevs": 2, 00:18:14.611 "num_base_bdevs_discovered": 1, 00:18:14.611 "num_base_bdevs_operational": 2, 00:18:14.611 "base_bdevs_list": [ 00:18:14.611 { 00:18:14.611 "name": "pt1", 00:18:14.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.611 "is_configured": true, 00:18:14.611 "data_offset": 256, 00:18:14.611 "data_size": 7936 00:18:14.611 }, 00:18:14.611 { 00:18:14.611 "name": null, 00:18:14.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.611 "is_configured": false, 00:18:14.611 "data_offset": 256, 00:18:14.611 "data_size": 7936 00:18:14.611 } 00:18:14.611 ] 00:18:14.611 }' 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.611 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.181 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.181 [2024-10-25 17:59:33.485879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.181 [2024-10-25 17:59:33.485960] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.181 [2024-10-25 17:59:33.485987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:15.181 [2024-10-25 17:59:33.486002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.181 [2024-10-25 17:59:33.486199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.181 [2024-10-25 17:59:33.486222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.181 [2024-10-25 17:59:33.486283] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.181 [2024-10-25 17:59:33.486316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.181 [2024-10-25 17:59:33.486416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.181 [2024-10-25 17:59:33.486435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:15.181 [2024-10-25 17:59:33.486521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:15.181 [2024-10-25 17:59:33.486609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.181 [2024-10-25 17:59:33.486623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:15.181 [2024-10-25 17:59:33.486701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.181 pt2 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.182 17:59:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.182 "name": 
"raid_bdev1", 00:18:15.182 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:15.182 "strip_size_kb": 0, 00:18:15.182 "state": "online", 00:18:15.182 "raid_level": "raid1", 00:18:15.182 "superblock": true, 00:18:15.182 "num_base_bdevs": 2, 00:18:15.182 "num_base_bdevs_discovered": 2, 00:18:15.182 "num_base_bdevs_operational": 2, 00:18:15.182 "base_bdevs_list": [ 00:18:15.182 { 00:18:15.182 "name": "pt1", 00:18:15.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.182 "is_configured": true, 00:18:15.182 "data_offset": 256, 00:18:15.182 "data_size": 7936 00:18:15.182 }, 00:18:15.182 { 00:18:15.182 "name": "pt2", 00:18:15.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.182 "is_configured": true, 00:18:15.182 "data_offset": 256, 00:18:15.182 "data_size": 7936 00:18:15.182 } 00:18:15.182 ] 00:18:15.182 }' 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.182 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.752 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.753 17:59:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.753 [2024-10-25 17:59:33.945429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.753 "name": "raid_bdev1", 00:18:15.753 "aliases": [ 00:18:15.753 "a542acf6-563a-4d2d-b052-e7559c336138" 00:18:15.753 ], 00:18:15.753 "product_name": "Raid Volume", 00:18:15.753 "block_size": 4128, 00:18:15.753 "num_blocks": 7936, 00:18:15.753 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:15.753 "md_size": 32, 00:18:15.753 "md_interleave": true, 00:18:15.753 "dif_type": 0, 00:18:15.753 "assigned_rate_limits": { 00:18:15.753 "rw_ios_per_sec": 0, 00:18:15.753 "rw_mbytes_per_sec": 0, 00:18:15.753 "r_mbytes_per_sec": 0, 00:18:15.753 "w_mbytes_per_sec": 0 00:18:15.753 }, 00:18:15.753 "claimed": false, 00:18:15.753 "zoned": false, 00:18:15.753 "supported_io_types": { 00:18:15.753 "read": true, 00:18:15.753 "write": true, 00:18:15.753 "unmap": false, 00:18:15.753 "flush": false, 00:18:15.753 "reset": true, 00:18:15.753 "nvme_admin": false, 00:18:15.753 "nvme_io": false, 00:18:15.753 "nvme_io_md": false, 00:18:15.753 "write_zeroes": true, 00:18:15.753 "zcopy": false, 00:18:15.753 "get_zone_info": false, 00:18:15.753 "zone_management": false, 00:18:15.753 "zone_append": false, 00:18:15.753 "compare": false, 00:18:15.753 "compare_and_write": false, 00:18:15.753 "abort": false, 00:18:15.753 "seek_hole": false, 00:18:15.753 "seek_data": false, 00:18:15.753 "copy": false, 00:18:15.753 "nvme_iov_md": 
false 00:18:15.753 }, 00:18:15.753 "memory_domains": [ 00:18:15.753 { 00:18:15.753 "dma_device_id": "system", 00:18:15.753 "dma_device_type": 1 00:18:15.753 }, 00:18:15.753 { 00:18:15.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.753 "dma_device_type": 2 00:18:15.753 }, 00:18:15.753 { 00:18:15.753 "dma_device_id": "system", 00:18:15.753 "dma_device_type": 1 00:18:15.753 }, 00:18:15.753 { 00:18:15.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.753 "dma_device_type": 2 00:18:15.753 } 00:18:15.753 ], 00:18:15.753 "driver_specific": { 00:18:15.753 "raid": { 00:18:15.753 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:15.753 "strip_size_kb": 0, 00:18:15.753 "state": "online", 00:18:15.753 "raid_level": "raid1", 00:18:15.753 "superblock": true, 00:18:15.753 "num_base_bdevs": 2, 00:18:15.753 "num_base_bdevs_discovered": 2, 00:18:15.753 "num_base_bdevs_operational": 2, 00:18:15.753 "base_bdevs_list": [ 00:18:15.753 { 00:18:15.753 "name": "pt1", 00:18:15.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.753 "is_configured": true, 00:18:15.753 "data_offset": 256, 00:18:15.753 "data_size": 7936 00:18:15.753 }, 00:18:15.753 { 00:18:15.753 "name": "pt2", 00:18:15.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.753 "is_configured": true, 00:18:15.753 "data_offset": 256, 00:18:15.753 "data_size": 7936 00:18:15.753 } 00:18:15.753 ] 00:18:15.753 } 00:18:15.753 } 00:18:15.753 }' 00:18:15.753 17:59:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:15.753 pt2' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.753 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.753 [2024-10-25 17:59:34.177063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a542acf6-563a-4d2d-b052-e7559c336138 '!=' a542acf6-563a-4d2d-b052-e7559c336138 ']' 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 [2024-10-25 17:59:34.220715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:16.014 "name": "raid_bdev1", 00:18:16.014 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:16.014 "strip_size_kb": 0, 00:18:16.014 "state": "online", 00:18:16.014 "raid_level": "raid1", 00:18:16.014 "superblock": true, 00:18:16.014 "num_base_bdevs": 2, 00:18:16.014 "num_base_bdevs_discovered": 1, 00:18:16.014 "num_base_bdevs_operational": 1, 00:18:16.014 "base_bdevs_list": [ 00:18:16.014 { 00:18:16.014 "name": null, 00:18:16.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.014 "is_configured": false, 00:18:16.014 "data_offset": 0, 00:18:16.014 "data_size": 7936 00:18:16.014 }, 00:18:16.014 { 00:18:16.014 "name": "pt2", 00:18:16.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.014 "is_configured": true, 00:18:16.014 "data_offset": 256, 00:18:16.014 "data_size": 7936 00:18:16.014 } 00:18:16.014 ] 00:18:16.014 }' 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.014 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.274 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.274 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.274 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.274 [2024-10-25 17:59:34.696114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.275 [2024-10-25 17:59:34.696168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.275 [2024-10-25 17:59:34.696290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.275 [2024-10-25 17:59:34.696354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:16.275 [2024-10-25 17:59:34.696369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:16.275 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.275 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:16.275 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.275 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.275 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.535 [2024-10-25 17:59:34.760038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.535 [2024-10-25 17:59:34.760137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.535 [2024-10-25 17:59:34.760160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:16.535 [2024-10-25 17:59:34.760175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.535 [2024-10-25 17:59:34.762538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.535 [2024-10-25 17:59:34.762582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.535 [2024-10-25 17:59:34.762659] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:16.535 [2024-10-25 17:59:34.762728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.535 [2024-10-25 17:59:34.762813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:16.535 [2024-10-25 17:59:34.762848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:16.535 [2024-10-25 17:59:34.762959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.535 [2024-10-25 17:59:34.763044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:16.535 [2024-10-25 17:59:34.763057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:16.535 [2024-10-25 17:59:34.763139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.535 pt2 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.535 17:59:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.535 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.536 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.536 "name": "raid_bdev1", 00:18:16.536 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:16.536 "strip_size_kb": 0, 00:18:16.536 "state": "online", 00:18:16.536 "raid_level": "raid1", 00:18:16.536 "superblock": true, 00:18:16.536 "num_base_bdevs": 2, 00:18:16.536 "num_base_bdevs_discovered": 1, 00:18:16.536 "num_base_bdevs_operational": 1, 00:18:16.536 "base_bdevs_list": [ 00:18:16.536 { 00:18:16.536 "name": null, 00:18:16.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.536 "is_configured": false, 00:18:16.536 "data_offset": 256, 00:18:16.536 "data_size": 7936 00:18:16.536 }, 00:18:16.536 { 00:18:16.536 "name": "pt2", 00:18:16.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.536 "is_configured": true, 00:18:16.536 "data_offset": 256, 00:18:16.536 "data_size": 7936 00:18:16.536 } 00:18:16.536 ] 00:18:16.536 }' 00:18:16.536 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.536 17:59:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.796 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.056 17:59:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.056 [2024-10-25 17:59:35.235127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.056 [2024-10-25 17:59:35.235166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.056 [2024-10-25 17:59:35.235275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.056 [2024-10-25 17:59:35.235346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.056 [2024-10-25 17:59:35.235361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.056 [2024-10-25 17:59:35.287069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.056 [2024-10-25 17:59:35.287148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.056 [2024-10-25 17:59:35.287178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:17.056 [2024-10-25 17:59:35.287192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.056 [2024-10-25 17:59:35.289404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.056 [2024-10-25 17:59:35.289441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.056 [2024-10-25 17:59:35.289510] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:17.056 [2024-10-25 17:59:35.289572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.056 [2024-10-25 17:59:35.289685] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:17.056 [2024-10-25 17:59:35.289700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.056 [2024-10-25 17:59:35.289722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:17.056 [2024-10-25 17:59:35.289792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.056 [2024-10-25 17:59:35.289889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:17.056 [2024-10-25 17:59:35.289903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:17.056 [2024-10-25 17:59:35.289978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.056 [2024-10-25 17:59:35.290058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:17.056 [2024-10-25 17:59:35.290076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:17.056 [2024-10-25 17:59:35.290156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.056 pt1 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.056 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.057 17:59:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.057 "name": "raid_bdev1", 00:18:17.057 "uuid": "a542acf6-563a-4d2d-b052-e7559c336138", 00:18:17.057 "strip_size_kb": 0, 00:18:17.057 "state": "online", 00:18:17.057 "raid_level": "raid1", 00:18:17.057 "superblock": true, 00:18:17.057 "num_base_bdevs": 2, 00:18:17.057 "num_base_bdevs_discovered": 1, 00:18:17.057 "num_base_bdevs_operational": 1, 00:18:17.057 "base_bdevs_list": [ 00:18:17.057 { 00:18:17.057 "name": null, 00:18:17.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.057 "is_configured": false, 00:18:17.057 "data_offset": 256, 00:18:17.057 "data_size": 7936 00:18:17.057 }, 00:18:17.057 { 00:18:17.057 "name": "pt2", 00:18:17.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.057 "is_configured": true, 00:18:17.057 "data_offset": 256, 00:18:17.057 "data_size": 7936 00:18:17.057 } 00:18:17.057 ] 00:18:17.057 }' 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.057 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.316 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:17.316 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:17.316 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.316 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:17.575 [2024-10-25 17:59:35.782548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a542acf6-563a-4d2d-b052-e7559c336138 '!=' a542acf6-563a-4d2d-b052-e7559c336138 ']' 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88636 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88636 ']' 00:18:17.575 17:59:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88636 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88636 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88636' 00:18:17.575 killing process with pid 88636 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88636 00:18:17.575 [2024-10-25 17:59:35.867268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.575 17:59:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88636 00:18:17.575 [2024-10-25 17:59:35.867427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.575 [2024-10-25 17:59:35.867507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.575 [2024-10-25 17:59:35.867539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:17.834 [2024-10-25 17:59:36.121000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.234 17:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:19.234 00:18:19.234 real 0m6.558s 00:18:19.234 user 0m9.876s 00:18:19.234 sys 0m1.139s 00:18:19.234 
17:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.234 17:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.234 ************************************ 00:18:19.234 END TEST raid_superblock_test_md_interleaved 00:18:19.234 ************************************ 00:18:19.234 17:59:37 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:19.234 17:59:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:19.234 17:59:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.234 17:59:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.234 ************************************ 00:18:19.234 START TEST raid_rebuild_test_sb_md_interleaved 00:18:19.234 ************************************ 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:19.234 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88966 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88966 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88966 ']' 00:18:19.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.235 17:59:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.235 [2024-10-25 17:59:37.614985] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:19.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:19.235 Zero copy mechanism will not be used. 
00:18:19.235 [2024-10-25 17:59:37.615189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88966 ] 00:18:19.494 [2024-10-25 17:59:37.776961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.494 [2024-10-25 17:59:37.914245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.753 [2024-10-25 17:59:38.154880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.753 [2024-10-25 17:59:38.154926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.322 BaseBdev1_malloc 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.322 17:59:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.322 [2024-10-25 17:59:38.576323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.322 [2024-10-25 17:59:38.576449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.322 [2024-10-25 17:59:38.576498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:20.322 [2024-10-25 17:59:38.576548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.322 [2024-10-25 17:59:38.578735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.322 [2024-10-25 17:59:38.578835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.322 BaseBdev1 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.322 BaseBdev2_malloc 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.322 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.323 [2024-10-25 17:59:38.641638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:20.323 [2024-10-25 17:59:38.641768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.323 [2024-10-25 17:59:38.641817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:20.323 [2024-10-25 17:59:38.641881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.323 [2024-10-25 17:59:38.644072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.323 [2024-10-25 17:59:38.644154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:20.323 BaseBdev2 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 spare_malloc 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 spare_delay 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 [2024-10-25 17:59:38.734193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.323 [2024-10-25 17:59:38.734266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.323 [2024-10-25 17:59:38.734292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:20.323 [2024-10-25 17:59:38.734305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.323 [2024-10-25 17:59:38.736490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.323 [2024-10-25 17:59:38.736601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.323 spare 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 [2024-10-25 17:59:38.746221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.323 [2024-10-25 17:59:38.748422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.323 [2024-10-25 
17:59:38.748714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:20.323 [2024-10-25 17:59:38.748776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.323 [2024-10-25 17:59:38.748923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:20.323 [2024-10-25 17:59:38.749052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:20.323 [2024-10-25 17:59:38.749092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:20.323 [2024-10-25 17:59:38.749220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:20.323 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.583 "name": "raid_bdev1", 00:18:20.583 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:20.583 "strip_size_kb": 0, 00:18:20.583 "state": "online", 00:18:20.583 "raid_level": "raid1", 00:18:20.583 "superblock": true, 00:18:20.583 "num_base_bdevs": 2, 00:18:20.583 "num_base_bdevs_discovered": 2, 00:18:20.583 "num_base_bdevs_operational": 2, 00:18:20.583 "base_bdevs_list": [ 00:18:20.583 { 00:18:20.583 "name": "BaseBdev1", 00:18:20.583 "uuid": "1f13b437-3d5c-575a-bdd9-3deb0f38c653", 00:18:20.583 "is_configured": true, 00:18:20.583 "data_offset": 256, 00:18:20.583 "data_size": 7936 00:18:20.583 }, 00:18:20.583 { 00:18:20.583 "name": "BaseBdev2", 00:18:20.583 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:20.583 "is_configured": true, 00:18:20.583 "data_offset": 256, 00:18:20.583 "data_size": 7936 00:18:20.583 } 00:18:20.583 ] 00:18:20.583 }' 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.583 17:59:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 17:59:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 [2024-10-25 17:59:39.217870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:21.103 17:59:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.103 [2024-10-25 17:59:39.301353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.103 17:59:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.103 "name": "raid_bdev1", 00:18:21.103 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:21.103 "strip_size_kb": 0, 00:18:21.103 "state": "online", 00:18:21.103 "raid_level": "raid1", 00:18:21.103 "superblock": true, 00:18:21.103 "num_base_bdevs": 2, 00:18:21.103 "num_base_bdevs_discovered": 1, 00:18:21.103 "num_base_bdevs_operational": 1, 00:18:21.103 "base_bdevs_list": [ 00:18:21.103 { 00:18:21.103 "name": null, 00:18:21.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.103 "is_configured": false, 00:18:21.103 "data_offset": 0, 00:18:21.103 "data_size": 7936 00:18:21.103 }, 00:18:21.103 { 00:18:21.103 "name": "BaseBdev2", 00:18:21.103 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:21.103 "is_configured": true, 00:18:21.103 "data_offset": 256, 00:18:21.103 "data_size": 7936 00:18:21.103 } 00:18:21.103 ] 00:18:21.103 }' 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.103 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.363 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.363 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.363 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.363 [2024-10-25 17:59:39.752713] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.363 [2024-10-25 17:59:39.774549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:21.363 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.363 17:59:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:21.363 [2024-10-25 17:59:39.776802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.742 "name": "raid_bdev1", 00:18:22.742 
"uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:22.742 "strip_size_kb": 0, 00:18:22.742 "state": "online", 00:18:22.742 "raid_level": "raid1", 00:18:22.742 "superblock": true, 00:18:22.742 "num_base_bdevs": 2, 00:18:22.742 "num_base_bdevs_discovered": 2, 00:18:22.742 "num_base_bdevs_operational": 2, 00:18:22.742 "process": { 00:18:22.742 "type": "rebuild", 00:18:22.742 "target": "spare", 00:18:22.742 "progress": { 00:18:22.742 "blocks": 2560, 00:18:22.742 "percent": 32 00:18:22.742 } 00:18:22.742 }, 00:18:22.742 "base_bdevs_list": [ 00:18:22.742 { 00:18:22.742 "name": "spare", 00:18:22.742 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:22.742 "is_configured": true, 00:18:22.742 "data_offset": 256, 00:18:22.742 "data_size": 7936 00:18:22.742 }, 00:18:22.742 { 00:18:22.742 "name": "BaseBdev2", 00:18:22.742 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:22.742 "is_configured": true, 00:18:22.742 "data_offset": 256, 00:18:22.742 "data_size": 7936 00:18:22.742 } 00:18:22.742 ] 00:18:22.742 }' 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.742 17:59:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.742 [2024-10-25 17:59:40.928320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:22.742 [2024-10-25 17:59:40.983275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.742 [2024-10-25 17:59:40.983437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.742 [2024-10-25 17:59:40.983483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.742 [2024-10-25 17:59:40.983528] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.742 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.743 "name": "raid_bdev1", 00:18:22.743 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:22.743 "strip_size_kb": 0, 00:18:22.743 "state": "online", 00:18:22.743 "raid_level": "raid1", 00:18:22.743 "superblock": true, 00:18:22.743 "num_base_bdevs": 2, 00:18:22.743 "num_base_bdevs_discovered": 1, 00:18:22.743 "num_base_bdevs_operational": 1, 00:18:22.743 "base_bdevs_list": [ 00:18:22.743 { 00:18:22.743 "name": null, 00:18:22.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.743 "is_configured": false, 00:18:22.743 "data_offset": 0, 00:18:22.743 "data_size": 7936 00:18:22.743 }, 00:18:22.743 { 00:18:22.743 "name": "BaseBdev2", 00:18:22.743 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:22.743 "is_configured": true, 00:18:22.743 "data_offset": 256, 00:18:22.743 "data_size": 7936 00:18:22.743 } 00:18:22.743 ] 00:18:22.743 }' 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.743 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.311 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.311 "name": "raid_bdev1", 00:18:23.311 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:23.311 "strip_size_kb": 0, 00:18:23.311 "state": "online", 00:18:23.311 "raid_level": "raid1", 00:18:23.311 "superblock": true, 00:18:23.311 "num_base_bdevs": 2, 00:18:23.311 "num_base_bdevs_discovered": 1, 00:18:23.311 "num_base_bdevs_operational": 1, 00:18:23.311 "base_bdevs_list": [ 00:18:23.311 { 00:18:23.311 "name": null, 00:18:23.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.311 "is_configured": false, 00:18:23.311 "data_offset": 0, 00:18:23.311 "data_size": 7936 00:18:23.311 }, 00:18:23.311 { 00:18:23.311 "name": "BaseBdev2", 00:18:23.311 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:23.311 "is_configured": true, 00:18:23.311 "data_offset": 256, 00:18:23.312 "data_size": 7936 00:18:23.312 } 00:18:23.312 ] 00:18:23.312 }' 
00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.312 [2024-10-25 17:59:41.655923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.312 [2024-10-25 17:59:41.675847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.312 17:59:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:23.312 [2024-10-25 17:59:41.678109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.250 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.250 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.251 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.251 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:24.251 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.510 "name": "raid_bdev1", 00:18:24.510 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:24.510 "strip_size_kb": 0, 00:18:24.510 "state": "online", 00:18:24.510 "raid_level": "raid1", 00:18:24.510 "superblock": true, 00:18:24.510 "num_base_bdevs": 2, 00:18:24.510 "num_base_bdevs_discovered": 2, 00:18:24.510 "num_base_bdevs_operational": 2, 00:18:24.510 "process": { 00:18:24.510 "type": "rebuild", 00:18:24.510 "target": "spare", 00:18:24.510 "progress": { 00:18:24.510 "blocks": 2560, 00:18:24.510 "percent": 32 00:18:24.510 } 00:18:24.510 }, 00:18:24.510 "base_bdevs_list": [ 00:18:24.510 { 00:18:24.510 "name": "spare", 00:18:24.510 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:24.510 "is_configured": true, 00:18:24.510 "data_offset": 256, 00:18:24.510 "data_size": 7936 00:18:24.510 }, 00:18:24.510 { 00:18:24.510 "name": "BaseBdev2", 00:18:24.510 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:24.510 "is_configured": true, 00:18:24.510 "data_offset": 256, 00:18:24.510 "data_size": 7936 00:18:24.510 } 00:18:24.510 ] 00:18:24.510 }' 00:18:24.510 17:59:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:24.510 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.510 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.511 17:59:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.511 "name": "raid_bdev1", 00:18:24.511 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:24.511 "strip_size_kb": 0, 00:18:24.511 "state": "online", 00:18:24.511 "raid_level": "raid1", 00:18:24.511 "superblock": true, 00:18:24.511 "num_base_bdevs": 2, 00:18:24.511 "num_base_bdevs_discovered": 2, 00:18:24.511 "num_base_bdevs_operational": 2, 00:18:24.511 "process": { 00:18:24.511 "type": "rebuild", 00:18:24.511 "target": "spare", 00:18:24.511 "progress": { 00:18:24.511 "blocks": 2816, 00:18:24.511 "percent": 35 00:18:24.511 } 00:18:24.511 }, 00:18:24.511 "base_bdevs_list": [ 00:18:24.511 { 00:18:24.511 "name": "spare", 00:18:24.511 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:24.511 "is_configured": true, 00:18:24.511 "data_offset": 256, 00:18:24.511 "data_size": 7936 00:18:24.511 }, 00:18:24.511 { 00:18:24.511 "name": "BaseBdev2", 00:18:24.511 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:24.511 "is_configured": true, 00:18:24.511 "data_offset": 256, 00:18:24.511 "data_size": 7936 00:18:24.511 } 00:18:24.511 ] 00:18:24.511 }' 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.511 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.771 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.771 17:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.729 17:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.729 17:59:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.729 "name": "raid_bdev1", 00:18:25.729 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:25.729 "strip_size_kb": 0, 00:18:25.729 "state": "online", 00:18:25.729 "raid_level": "raid1", 00:18:25.729 "superblock": true, 00:18:25.729 "num_base_bdevs": 2, 00:18:25.729 "num_base_bdevs_discovered": 2, 00:18:25.729 "num_base_bdevs_operational": 2, 00:18:25.729 "process": { 00:18:25.729 "type": "rebuild", 00:18:25.729 "target": "spare", 00:18:25.729 "progress": { 00:18:25.729 "blocks": 5632, 00:18:25.729 "percent": 70 00:18:25.729 } 00:18:25.729 }, 00:18:25.729 "base_bdevs_list": [ 00:18:25.729 { 00:18:25.729 "name": "spare", 00:18:25.729 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:25.729 "is_configured": true, 00:18:25.729 "data_offset": 256, 00:18:25.729 "data_size": 7936 00:18:25.729 }, 00:18:25.729 { 00:18:25.729 "name": "BaseBdev2", 00:18:25.729 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:25.729 "is_configured": true, 00:18:25.729 "data_offset": 256, 00:18:25.729 "data_size": 7936 00:18:25.729 } 00:18:25.729 ] 00:18:25.729 }' 00:18:25.729 17:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.729 17:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.729 17:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.729 17:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.729 17:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.666 [2024-10-25 17:59:44.794422] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:26.666 [2024-10-25 17:59:44.794610] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:26.666 [2024-10-25 17:59:44.794794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.926 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.926 "name": "raid_bdev1", 00:18:26.926 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:26.926 "strip_size_kb": 0, 00:18:26.926 "state": "online", 00:18:26.926 "raid_level": "raid1", 00:18:26.926 "superblock": true, 00:18:26.926 "num_base_bdevs": 2, 00:18:26.926 
"num_base_bdevs_discovered": 2, 00:18:26.926 "num_base_bdevs_operational": 2, 00:18:26.926 "base_bdevs_list": [ 00:18:26.927 { 00:18:26.927 "name": "spare", 00:18:26.927 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:26.927 "is_configured": true, 00:18:26.927 "data_offset": 256, 00:18:26.927 "data_size": 7936 00:18:26.927 }, 00:18:26.927 { 00:18:26.927 "name": "BaseBdev2", 00:18:26.927 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:26.927 "is_configured": true, 00:18:26.927 "data_offset": 256, 00:18:26.927 "data_size": 7936 00:18:26.927 } 00:18:26.927 ] 00:18:26.927 }' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.927 17:59:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.927 "name": "raid_bdev1", 00:18:26.927 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:26.927 "strip_size_kb": 0, 00:18:26.927 "state": "online", 00:18:26.927 "raid_level": "raid1", 00:18:26.927 "superblock": true, 00:18:26.927 "num_base_bdevs": 2, 00:18:26.927 "num_base_bdevs_discovered": 2, 00:18:26.927 "num_base_bdevs_operational": 2, 00:18:26.927 "base_bdevs_list": [ 00:18:26.927 { 00:18:26.927 "name": "spare", 00:18:26.927 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:26.927 "is_configured": true, 00:18:26.927 "data_offset": 256, 00:18:26.927 "data_size": 7936 00:18:26.927 }, 00:18:26.927 { 00:18:26.927 "name": "BaseBdev2", 00:18:26.927 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:26.927 "is_configured": true, 00:18:26.927 "data_offset": 256, 00:18:26.927 "data_size": 7936 00:18:26.927 } 00:18:26.927 ] 00:18:26.927 }' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.927 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.187 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.187 17:59:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.187 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.188 "name": 
"raid_bdev1", 00:18:27.188 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:27.188 "strip_size_kb": 0, 00:18:27.188 "state": "online", 00:18:27.188 "raid_level": "raid1", 00:18:27.188 "superblock": true, 00:18:27.188 "num_base_bdevs": 2, 00:18:27.188 "num_base_bdevs_discovered": 2, 00:18:27.188 "num_base_bdevs_operational": 2, 00:18:27.188 "base_bdevs_list": [ 00:18:27.188 { 00:18:27.188 "name": "spare", 00:18:27.188 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:27.188 "is_configured": true, 00:18:27.188 "data_offset": 256, 00:18:27.188 "data_size": 7936 00:18:27.188 }, 00:18:27.188 { 00:18:27.188 "name": "BaseBdev2", 00:18:27.188 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:27.188 "is_configured": true, 00:18:27.188 "data_offset": 256, 00:18:27.188 "data_size": 7936 00:18:27.188 } 00:18:27.188 ] 00:18:27.188 }' 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.188 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.448 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.448 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.448 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 [2024-10-25 17:59:45.889372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.708 [2024-10-25 17:59:45.889486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.708 [2024-10-25 17:59:45.889629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.708 [2024-10-25 17:59:45.889748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.708 [2024-10-25 
17:59:45.889805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.708 17:59:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 [2024-10-25 17:59:45.961242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.708 [2024-10-25 17:59:45.961321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.708 [2024-10-25 17:59:45.961349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:27.708 [2024-10-25 17:59:45.961361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.708 [2024-10-25 17:59:45.963745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.708 [2024-10-25 17:59:45.963847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.708 [2024-10-25 17:59:45.963934] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:27.708 [2024-10-25 17:59:45.964006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.708 [2024-10-25 17:59:45.964142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.708 spare 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.708 17:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 [2024-10-25 17:59:46.064067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:27.708 [2024-10-25 17:59:46.064148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.708 [2024-10-25 17:59:46.064315] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:27.708 [2024-10-25 17:59:46.064454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:27.708 [2024-10-25 17:59:46.064464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:27.708 [2024-10-25 17:59:46.064617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.708 17:59:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.708 "name": "raid_bdev1", 00:18:27.708 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:27.708 "strip_size_kb": 0, 00:18:27.708 "state": "online", 00:18:27.708 "raid_level": "raid1", 00:18:27.708 "superblock": true, 00:18:27.708 "num_base_bdevs": 2, 00:18:27.708 "num_base_bdevs_discovered": 2, 00:18:27.708 "num_base_bdevs_operational": 2, 00:18:27.708 "base_bdevs_list": [ 00:18:27.708 { 00:18:27.708 "name": "spare", 00:18:27.708 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:27.708 "is_configured": true, 00:18:27.708 "data_offset": 256, 00:18:27.708 "data_size": 7936 00:18:27.708 }, 00:18:27.708 { 00:18:27.708 "name": "BaseBdev2", 00:18:27.708 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:27.708 "is_configured": true, 00:18:27.708 "data_offset": 256, 00:18:27.708 "data_size": 7936 00:18:27.708 } 00:18:27.708 ] 00:18:27.708 }' 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.708 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.276 17:59:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.276 "name": "raid_bdev1", 00:18:28.276 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:28.276 "strip_size_kb": 0, 00:18:28.276 "state": "online", 00:18:28.276 "raid_level": "raid1", 00:18:28.276 "superblock": true, 00:18:28.276 "num_base_bdevs": 2, 00:18:28.276 "num_base_bdevs_discovered": 2, 00:18:28.276 "num_base_bdevs_operational": 2, 00:18:28.276 "base_bdevs_list": [ 00:18:28.276 { 00:18:28.276 "name": "spare", 00:18:28.276 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:28.276 "is_configured": true, 00:18:28.276 "data_offset": 256, 00:18:28.276 "data_size": 7936 00:18:28.276 }, 00:18:28.276 { 00:18:28.276 "name": "BaseBdev2", 00:18:28.276 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:28.276 "is_configured": true, 00:18:28.276 "data_offset": 256, 00:18:28.276 "data_size": 7936 00:18:28.276 } 00:18:28.276 ] 00:18:28.276 }' 00:18:28.276 17:59:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 [2024-10-25 17:59:46.684348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.276 17:59:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.276 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.537 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.537 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.537 "name": "raid_bdev1", 00:18:28.537 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:28.537 "strip_size_kb": 0, 00:18:28.537 "state": "online", 00:18:28.537 
"raid_level": "raid1", 00:18:28.537 "superblock": true, 00:18:28.537 "num_base_bdevs": 2, 00:18:28.537 "num_base_bdevs_discovered": 1, 00:18:28.537 "num_base_bdevs_operational": 1, 00:18:28.537 "base_bdevs_list": [ 00:18:28.537 { 00:18:28.537 "name": null, 00:18:28.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.537 "is_configured": false, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 7936 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev2", 00:18:28.537 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 256, 00:18:28.537 "data_size": 7936 00:18:28.537 } 00:18:28.537 ] 00:18:28.537 }' 00:18:28.537 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.537 17:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.796 17:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.796 17:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.796 17:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.796 [2024-10-25 17:59:47.163586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.796 [2024-10-25 17:59:47.163900] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.796 [2024-10-25 17:59:47.163979] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:28.796 [2024-10-25 17:59:47.164054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.796 [2024-10-25 17:59:47.183368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:28.796 17:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.796 17:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:28.796 [2024-10-25 17:59:47.185630] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.176 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.176 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.176 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.176 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:30.177 "name": "raid_bdev1", 00:18:30.177 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:30.177 "strip_size_kb": 0, 00:18:30.177 "state": "online", 00:18:30.177 "raid_level": "raid1", 00:18:30.177 "superblock": true, 00:18:30.177 "num_base_bdevs": 2, 00:18:30.177 "num_base_bdevs_discovered": 2, 00:18:30.177 "num_base_bdevs_operational": 2, 00:18:30.177 "process": { 00:18:30.177 "type": "rebuild", 00:18:30.177 "target": "spare", 00:18:30.177 "progress": { 00:18:30.177 "blocks": 2560, 00:18:30.177 "percent": 32 00:18:30.177 } 00:18:30.177 }, 00:18:30.177 "base_bdevs_list": [ 00:18:30.177 { 00:18:30.177 "name": "spare", 00:18:30.177 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:30.177 "is_configured": true, 00:18:30.177 "data_offset": 256, 00:18:30.177 "data_size": 7936 00:18:30.177 }, 00:18:30.177 { 00:18:30.177 "name": "BaseBdev2", 00:18:30.177 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:30.177 "is_configured": true, 00:18:30.177 "data_offset": 256, 00:18:30.177 "data_size": 7936 00:18:30.177 } 00:18:30.177 ] 00:18:30.177 }' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.177 [2024-10-25 17:59:48.333036] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.177 [2024-10-25 17:59:48.391887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.177 [2024-10-25 17:59:48.391962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.177 [2024-10-25 17:59:48.391981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.177 [2024-10-25 17:59:48.391992] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.177 17:59:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.177 "name": "raid_bdev1", 00:18:30.177 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:30.177 "strip_size_kb": 0, 00:18:30.177 "state": "online", 00:18:30.177 "raid_level": "raid1", 00:18:30.177 "superblock": true, 00:18:30.177 "num_base_bdevs": 2, 00:18:30.177 "num_base_bdevs_discovered": 1, 00:18:30.177 "num_base_bdevs_operational": 1, 00:18:30.177 "base_bdevs_list": [ 00:18:30.177 { 00:18:30.177 "name": null, 00:18:30.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.177 "is_configured": false, 00:18:30.177 "data_offset": 0, 00:18:30.177 "data_size": 7936 00:18:30.177 }, 00:18:30.177 { 00:18:30.177 "name": "BaseBdev2", 00:18:30.177 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:30.177 "is_configured": true, 00:18:30.177 "data_offset": 256, 00:18:30.177 "data_size": 7936 00:18:30.177 } 00:18:30.177 ] 00:18:30.177 }' 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.177 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.746 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.746 17:59:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.746 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.746 [2024-10-25 17:59:48.896218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.746 [2024-10-25 17:59:48.896347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.746 [2024-10-25 17:59:48.896398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:30.746 [2024-10-25 17:59:48.896439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.746 [2024-10-25 17:59:48.896714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.746 [2024-10-25 17:59:48.896783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.746 [2024-10-25 17:59:48.896895] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:30.746 [2024-10-25 17:59:48.896944] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.746 [2024-10-25 17:59:48.896995] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:30.746 [2024-10-25 17:59:48.897111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.746 [2024-10-25 17:59:48.916837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:30.746 spare 00:18:30.746 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.746 17:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:30.746 [2024-10-25 17:59:48.919088] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:31.685 "name": "raid_bdev1", 00:18:31.685 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:31.685 "strip_size_kb": 0, 00:18:31.685 "state": "online", 00:18:31.685 "raid_level": "raid1", 00:18:31.685 "superblock": true, 00:18:31.685 "num_base_bdevs": 2, 00:18:31.685 "num_base_bdevs_discovered": 2, 00:18:31.685 "num_base_bdevs_operational": 2, 00:18:31.685 "process": { 00:18:31.685 "type": "rebuild", 00:18:31.685 "target": "spare", 00:18:31.685 "progress": { 00:18:31.685 "blocks": 2560, 00:18:31.685 "percent": 32 00:18:31.685 } 00:18:31.685 }, 00:18:31.685 "base_bdevs_list": [ 00:18:31.685 { 00:18:31.685 "name": "spare", 00:18:31.685 "uuid": "0a5df287-79f5-5c84-b52b-e97a5e585689", 00:18:31.685 "is_configured": true, 00:18:31.685 "data_offset": 256, 00:18:31.685 "data_size": 7936 00:18:31.685 }, 00:18:31.685 { 00:18:31.685 "name": "BaseBdev2", 00:18:31.685 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:31.685 "is_configured": true, 00:18:31.685 "data_offset": 256, 00:18:31.685 "data_size": 7936 00:18:31.685 } 00:18:31.685 ] 00:18:31.685 }' 00:18:31.685 17:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.685 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.685 [2024-10-25 
17:59:50.058110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.945 [2024-10-25 17:59:50.124963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.945 [2024-10-25 17:59:50.125096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.945 [2024-10-25 17:59:50.125123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.945 [2024-10-25 17:59:50.125132] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.945 17:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.945 "name": "raid_bdev1", 00:18:31.945 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:31.945 "strip_size_kb": 0, 00:18:31.945 "state": "online", 00:18:31.945 "raid_level": "raid1", 00:18:31.945 "superblock": true, 00:18:31.945 "num_base_bdevs": 2, 00:18:31.945 "num_base_bdevs_discovered": 1, 00:18:31.945 "num_base_bdevs_operational": 1, 00:18:31.945 "base_bdevs_list": [ 00:18:31.945 { 00:18:31.945 "name": null, 00:18:31.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.945 "is_configured": false, 00:18:31.945 "data_offset": 0, 00:18:31.945 "data_size": 7936 00:18:31.945 }, 00:18:31.945 { 00:18:31.945 "name": "BaseBdev2", 00:18:31.945 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:31.945 "is_configured": true, 00:18:31.945 "data_offset": 256, 00:18:31.945 "data_size": 7936 00:18:31.945 } 00:18:31.945 ] 00:18:31.945 }' 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.945 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.208 17:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.208 "name": "raid_bdev1", 00:18:32.208 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:32.208 "strip_size_kb": 0, 00:18:32.208 "state": "online", 00:18:32.208 "raid_level": "raid1", 00:18:32.208 "superblock": true, 00:18:32.208 "num_base_bdevs": 2, 00:18:32.208 "num_base_bdevs_discovered": 1, 00:18:32.208 "num_base_bdevs_operational": 1, 00:18:32.208 "base_bdevs_list": [ 00:18:32.208 { 00:18:32.208 "name": null, 00:18:32.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.208 "is_configured": false, 00:18:32.208 "data_offset": 0, 00:18:32.208 "data_size": 7936 00:18:32.208 }, 00:18:32.208 { 00:18:32.208 "name": "BaseBdev2", 00:18:32.208 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:32.208 "is_configured": true, 00:18:32.208 "data_offset": 256, 
00:18:32.208 "data_size": 7936 00:18:32.208 } 00:18:32.208 ] 00:18:32.208 }' 00:18:32.208 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.494 [2024-10-25 17:59:50.715639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:32.494 [2024-10-25 17:59:50.715708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.494 [2024-10-25 17:59:50.715736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:32.494 [2024-10-25 17:59:50.715747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.494 [2024-10-25 17:59:50.715958] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.494 [2024-10-25 17:59:50.715973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:32.494 [2024-10-25 17:59:50.716034] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:32.494 [2024-10-25 17:59:50.716050] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.494 [2024-10-25 17:59:50.716062] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:32.494 [2024-10-25 17:59:50.716074] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:32.494 BaseBdev1 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.494 17:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.430 17:59:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.430 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.431 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.431 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.431 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.431 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.431 "name": "raid_bdev1", 00:18:33.431 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:33.431 "strip_size_kb": 0, 00:18:33.431 "state": "online", 00:18:33.431 "raid_level": "raid1", 00:18:33.431 "superblock": true, 00:18:33.431 "num_base_bdevs": 2, 00:18:33.431 "num_base_bdevs_discovered": 1, 00:18:33.431 "num_base_bdevs_operational": 1, 00:18:33.431 "base_bdevs_list": [ 00:18:33.431 { 00:18:33.431 "name": null, 00:18:33.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.431 "is_configured": false, 00:18:33.431 "data_offset": 0, 00:18:33.431 "data_size": 7936 00:18:33.431 }, 00:18:33.431 { 00:18:33.431 "name": "BaseBdev2", 00:18:33.431 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:33.431 "is_configured": true, 00:18:33.431 "data_offset": 256, 00:18:33.431 "data_size": 7936 00:18:33.431 } 00:18:33.431 ] 00:18:33.431 }' 00:18:33.431 17:59:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.431 17:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.000 "name": "raid_bdev1", 00:18:34.000 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:34.000 "strip_size_kb": 0, 00:18:34.000 "state": "online", 00:18:34.000 "raid_level": "raid1", 00:18:34.000 "superblock": true, 00:18:34.000 "num_base_bdevs": 2, 00:18:34.000 "num_base_bdevs_discovered": 1, 00:18:34.000 "num_base_bdevs_operational": 1, 00:18:34.000 "base_bdevs_list": [ 00:18:34.000 { 00:18:34.000 "name": 
null, 00:18:34.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.000 "is_configured": false, 00:18:34.000 "data_offset": 0, 00:18:34.000 "data_size": 7936 00:18:34.000 }, 00:18:34.000 { 00:18:34.000 "name": "BaseBdev2", 00:18:34.000 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:34.000 "is_configured": true, 00:18:34.000 "data_offset": 256, 00:18:34.000 "data_size": 7936 00:18:34.000 } 00:18:34.000 ] 00:18:34.000 }' 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.000 [2024-10-25 17:59:52.385025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.000 [2024-10-25 17:59:52.385270] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.000 [2024-10-25 17:59:52.385296] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:34.000 request: 00:18:34.000 { 00:18:34.000 "base_bdev": "BaseBdev1", 00:18:34.000 "raid_bdev": "raid_bdev1", 00:18:34.000 "method": "bdev_raid_add_base_bdev", 00:18:34.000 "req_id": 1 00:18:34.000 } 00:18:34.000 Got JSON-RPC error response 00:18:34.000 response: 00:18:34.000 { 00:18:34.000 "code": -22, 00:18:34.000 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:34.000 } 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.000 17:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.378 "name": "raid_bdev1", 00:18:35.378 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:35.378 "strip_size_kb": 0, 
00:18:35.378 "state": "online", 00:18:35.378 "raid_level": "raid1", 00:18:35.378 "superblock": true, 00:18:35.378 "num_base_bdevs": 2, 00:18:35.378 "num_base_bdevs_discovered": 1, 00:18:35.378 "num_base_bdevs_operational": 1, 00:18:35.378 "base_bdevs_list": [ 00:18:35.378 { 00:18:35.378 "name": null, 00:18:35.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.378 "is_configured": false, 00:18:35.378 "data_offset": 0, 00:18:35.378 "data_size": 7936 00:18:35.378 }, 00:18:35.378 { 00:18:35.378 "name": "BaseBdev2", 00:18:35.378 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:35.378 "is_configured": true, 00:18:35.378 "data_offset": 256, 00:18:35.378 "data_size": 7936 00:18:35.378 } 00:18:35.378 ] 00:18:35.378 }' 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.378 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.638 
17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.638 "name": "raid_bdev1", 00:18:35.638 "uuid": "8a65194e-3c66-4f1d-a06b-f58aec75222c", 00:18:35.638 "strip_size_kb": 0, 00:18:35.638 "state": "online", 00:18:35.638 "raid_level": "raid1", 00:18:35.638 "superblock": true, 00:18:35.638 "num_base_bdevs": 2, 00:18:35.638 "num_base_bdevs_discovered": 1, 00:18:35.638 "num_base_bdevs_operational": 1, 00:18:35.638 "base_bdevs_list": [ 00:18:35.638 { 00:18:35.638 "name": null, 00:18:35.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.638 "is_configured": false, 00:18:35.638 "data_offset": 0, 00:18:35.638 "data_size": 7936 00:18:35.638 }, 00:18:35.638 { 00:18:35.638 "name": "BaseBdev2", 00:18:35.638 "uuid": "bc047d3f-4f04-53d8-926c-d5f4a647cbdb", 00:18:35.638 "is_configured": true, 00:18:35.638 "data_offset": 256, 00:18:35.638 "data_size": 7936 00:18:35.638 } 00:18:35.638 ] 00:18:35.638 }' 00:18:35.638 17:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88966 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88966 ']' 00:18:35.638 17:59:54 
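The `verify_raid_bdev_process` checks above compare with `[[ none == \n\o\n\e ]]`. That escaping is deliberate: the right-hand side of bash's `[[ ... == ... ]]` is a glob pattern unless quoted or escaped, so backslash-escaping every character forces an exact literal match. A minimal bash sketch (variable names here are illustrative, not from the traced script):

```shell
# In [[ x == pattern ]] the right-hand side globs by default; escaping
# each character (as the traced `[[ none == \n\o\n\e ]]` does) makes it
# a literal string, so only exactly "none" matches.
value="none"
match=""
if [[ $value == \n\o\n\e ]]; then
  match="literal"
fi

# Left unescaped, the right-hand side is a glob: n*e also matches "nonsense".
other="nonsense"
glob=""
if [[ $other == n*e ]]; then
  glob="yes"
fi
```

This is why the trace prints the target value with backslashes: xtrace shows the pattern exactly as the script wrote it.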
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88966 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.638 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88966 00:18:35.897 killing process with pid 88966 00:18:35.897 Received shutdown signal, test time was about 60.000000 seconds 00:18:35.897 00:18:35.897 Latency(us) 00:18:35.897 [2024-10-25T17:59:54.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.897 [2024-10-25T17:59:54.333Z] =================================================================================================================== 00:18:35.897 [2024-10-25T17:59:54.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.897 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.897 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.897 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88966' 00:18:35.897 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88966 00:18:35.897 [2024-10-25 17:59:54.102940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.897 [2024-10-25 17:59:54.103088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.897 [2024-10-25 17:59:54.103147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.897 [2024-10-25 17:59:54.103161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:35.897 17:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88966 00:18:36.156 [2024-10-25 17:59:54.479444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.535 17:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:37.535 00:18:37.535 real 0m18.313s 00:18:37.535 user 0m24.054s 00:18:37.535 sys 0m1.753s 00:18:37.535 17:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.535 ************************************ 00:18:37.535 END TEST raid_rebuild_test_sb_md_interleaved 00:18:37.535 ************************************ 00:18:37.535 17:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.535 17:59:55 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:37.535 17:59:55 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:37.535 17:59:55 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88966 ']' 00:18:37.535 17:59:55 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88966 00:18:37.535 17:59:55 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:37.535 00:18:37.535 real 12m9.930s 00:18:37.535 user 16m22.550s 00:18:37.535 sys 1m56.744s 00:18:37.535 17:59:55 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.535 ************************************ 00:18:37.535 17:59:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.535 END TEST bdev_raid 00:18:37.535 ************************************ 00:18:37.795 17:59:55 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:37.795 17:59:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:37.795 17:59:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.795 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 
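The shutdown sequence above runs `killprocess 88966`: check the pid with `kill -0`, read the command name via `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then kill and reap. A simplified, hedged sketch of that helper (the real one lives in `autotest_common.sh` and does more):

```shell
# Simplified sketch of the killprocess helper traced in the log:
# confirm the pid is alive, inspect its command name, never signal
# sudo itself, then terminate the process and reap it with `wait`.
killprocess() {
  local pid=$1
  if [ -z "$pid" ]; then return 1; fi
  if ! kill -0 "$pid" 2>/dev/null; then return 1; fi   # not running
  local name
  name=$(ps --no-headers -o comm= "$pid")              # e.g. reactor_0
  if [ "$name" = "sudo" ]; then return 1; fi           # never kill sudo
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                      # reap the child
}

sleep 60 &    # stand-in for the SPDK target process from the log
tgt=$!
killprocess "$tgt"
```

The `wait` matters: without it the killed target lingers as a zombie, and a later `kill -0` liveness check (as in the cleanup at the end of the suite) would still report it alive.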
************************************ 00:18:37.795 START TEST spdkcli_raid 00:18:37.795 ************************************ 00:18:37.795 17:59:55 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:37.795 * Looking for test storage... 00:18:37.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:37.795 17:59:56 spdkcli_raid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:37.795 17:59:56 spdkcli_raid -- common/autotest_common.sh@1689 -- # lcov --version 00:18:37.795 17:59:56 spdkcli_raid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:37.795 17:59:56 spdkcli_raid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.795 17:59:56 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:37.795 17:59:56 spdkcli_raid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.796 17:59:56 spdkcli_raid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:37.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.796 --rc genhtml_branch_coverage=1 00:18:37.796 --rc genhtml_function_coverage=1 00:18:37.796 --rc genhtml_legend=1 00:18:37.796 --rc geninfo_all_blocks=1 00:18:37.796 --rc geninfo_unexecuted_blocks=1 00:18:37.796 00:18:37.796 ' 00:18:37.796 17:59:56 spdkcli_raid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:37.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.796 --rc genhtml_branch_coverage=1 00:18:37.796 --rc genhtml_function_coverage=1 00:18:37.796 --rc genhtml_legend=1 00:18:37.796 --rc geninfo_all_blocks=1 00:18:37.796 --rc geninfo_unexecuted_blocks=1 00:18:37.796 00:18:37.796 ' 00:18:37.796 
17:59:56 spdkcli_raid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:37.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.796 --rc genhtml_branch_coverage=1 00:18:37.796 --rc genhtml_function_coverage=1 00:18:37.796 --rc genhtml_legend=1 00:18:37.796 --rc geninfo_all_blocks=1 00:18:37.796 --rc geninfo_unexecuted_blocks=1 00:18:37.796 00:18:37.796 ' 00:18:37.796 17:59:56 spdkcli_raid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:37.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.796 --rc genhtml_branch_coverage=1 00:18:37.796 --rc genhtml_function_coverage=1 00:18:37.796 --rc genhtml_legend=1 00:18:37.796 --rc geninfo_all_blocks=1 00:18:37.796 --rc geninfo_unexecuted_blocks=1 00:18:37.796 00:18:37.796 ' 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
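The `lt 1.15 2` trace above is `scripts/common.sh` deciding which lcov option set to use: it splits both versions on `.`, `-` and `:` (`IFS=.-:` plus `read -ra`) and compares components numerically left to right. A hedged, condensed sketch of that comparison (function name and zero-padding are mine; suffixes like `rc1` would break the arithmetic):

```shell
# Component-wise version compare, as traced above: split on '.', '-',
# ':' and compare numerically, padding the shorter list with zeros.
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  local i a b
  for (( i = 0; i < n; i++ )); do
    a=${v1[i]:-0} b=${v2[i]:-0}
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Here `1.15` splits to `(1 15)` and `2` to `(2)`; the first components already decide it (`1 < 2`), which is why the trace then exports the pre-2.0 `LCOV_OPTS`.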
00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:37.796 17:59:56 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:37.796 17:59:56 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:37.796 17:59:56 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.796 17:59:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.057 17:59:56 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:38.057 17:59:56 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89651 00:18:38.057 17:59:56 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:38.057 17:59:56 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89651 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89651 ']' 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.057 17:59:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.057 [2024-10-25 17:59:56.343475] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:18:38.057 [2024-10-25 17:59:56.343693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89651 ] 00:18:38.348 [2024-10-25 17:59:56.523959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:38.348 [2024-10-25 17:59:56.663070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.348 [2024-10-25 17:59:56.663104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.286 17:59:57 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.286 17:59:57 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:39.286 17:59:57 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:39.286 17:59:57 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.286 17:59:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.546 17:59:57 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:39.546 17:59:57 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.546 17:59:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.546 17:59:57 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:39.546 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:39.546 ' 00:18:40.929 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:40.929 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:41.188 17:59:59 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:41.188 17:59:59 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.188 17:59:59 spdkcli_raid -- 
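The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above comes from `waitforlisten`, which polls for the target's RPC socket while the process stays alive, bounded by `max_retries=100`. A hedged sketch of that polling pattern (the socket test and retry arithmetic are my simplification, not the exact helper):

```shell
# Poll for the target's UNIX-domain RPC socket, as the traced
# waitforlisten does: succeed when the socket appears, fail fast if
# the target process dies, give up after max_retries attempts.
waitforlisten() {
  local pid=$1 rpc_addr=$2 max_retries=${3:-100}
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( max_retries-- > 0 )); do
    if ! kill -0 "$pid" 2>/dev/null; then return 1; fi  # target died
    if [ -S "$rpc_addr" ]; then return 0; fi            # socket is up
    sleep 0.1
  done
  return 1
}
```

Checking the pid on every iteration is what keeps a crashed target from hanging the test for the full retry budget; the trace's `waitforlisten 89651` returns as soon as `spdk_tgt` opens `/var/tmp/spdk.sock`.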
common/autotest_common.sh@10 -- # set +x 00:18:41.188 17:59:59 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:41.189 17:59:59 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.189 17:59:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.189 17:59:59 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:41.189 ' 00:18:42.566 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:42.566 18:00:00 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:42.566 18:00:00 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:42.566 18:00:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.566 18:00:00 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:42.566 18:00:00 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.566 18:00:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.566 18:00:00 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:42.566 18:00:00 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:43.153 18:00:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:43.153 18:00:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:43.153 18:00:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:43.153 18:00:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.153 18:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.153 18:00:01 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:43.153 18:00:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.153 18:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.153 18:00:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:43.153 ' 00:18:44.091 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:44.091 18:00:02 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:44.091 18:00:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.091 18:00:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.351 18:00:02 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:44.351 18:00:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.351 18:00:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.351 18:00:02 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:44.351 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:44.351 ' 00:18:45.732 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:45.732 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:45.732 18:00:04 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.732 18:00:04 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89651 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89651 ']' 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89651 00:18:45.732 18:00:04 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.732 18:00:04 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89651 00:18:45.992 killing process with pid 89651 00:18:45.992 18:00:04 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:45.992 18:00:04 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:45.992 18:00:04 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89651' 00:18:45.992 18:00:04 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89651 00:18:45.992 18:00:04 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89651 00:18:49.278 Process with pid 89651 is not found 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89651 ']' 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89651 00:18:49.278 18:00:07 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89651 ']' 00:18:49.278 18:00:07 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89651 00:18:49.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89651) - No such process 00:18:49.278 18:00:07 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89651 is not found' 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:49.278 18:00:07 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:49.278 ************************************ 00:18:49.278 END TEST spdkcli_raid 
00:18:49.278 ************************************ 00:18:49.278 00:18:49.278 real 0m11.286s 00:18:49.278 user 0m23.323s 00:18:49.278 sys 0m1.170s 00:18:49.278 18:00:07 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.278 18:00:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.278 18:00:07 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:49.278 18:00:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.278 18:00:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.278 18:00:07 -- common/autotest_common.sh@10 -- # set +x 00:18:49.278 ************************************ 00:18:49.278 START TEST blockdev_raid5f 00:18:49.278 ************************************ 00:18:49.278 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:49.278 * Looking for test storage... 00:18:49.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:49.278 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:49.278 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1689 -- # lcov --version 00:18:49.278 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:49.278 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.278 18:00:07 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.279 18:00:07 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:49.279 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.279 --rc genhtml_branch_coverage=1 00:18:49.279 --rc genhtml_function_coverage=1 00:18:49.279 --rc genhtml_legend=1 00:18:49.279 --rc geninfo_all_blocks=1 00:18:49.279 --rc geninfo_unexecuted_blocks=1 00:18:49.279 00:18:49.279 ' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:49.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.279 --rc genhtml_branch_coverage=1 00:18:49.279 --rc genhtml_function_coverage=1 00:18:49.279 --rc genhtml_legend=1 00:18:49.279 --rc geninfo_all_blocks=1 00:18:49.279 --rc geninfo_unexecuted_blocks=1 00:18:49.279 00:18:49.279 ' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:49.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.279 --rc genhtml_branch_coverage=1 00:18:49.279 --rc genhtml_function_coverage=1 00:18:49.279 --rc genhtml_legend=1 00:18:49.279 --rc geninfo_all_blocks=1 00:18:49.279 --rc geninfo_unexecuted_blocks=1 00:18:49.279 00:18:49.279 ' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:49.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.279 --rc genhtml_branch_coverage=1 00:18:49.279 --rc genhtml_function_coverage=1 00:18:49.279 --rc genhtml_legend=1 00:18:49.279 --rc geninfo_all_blocks=1 00:18:49.279 --rc geninfo_unexecuted_blocks=1 00:18:49.279 00:18:49.279 ' 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:49.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89938 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:49.279 18:00:07 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89938 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 89938 ']' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.279 18:00:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:49.279 [2024-10-25 18:00:07.621947] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
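`blockdev.sh@48` above registers `trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT`, so the target is torn down on every exit path, including an aborted run; a clean run disarms it with `trap - EXIT` (as `bdev_raid.sh@1015` did earlier in this log). A hedged sketch of the idiom, using a subshell to stand in for one test run:

```shell
# Cleanup-on-exit idiom from the traced scripts: the EXIT trap kills
# (and reaps) the target even if the subshell "test run" aborts early.
tmpfile=$(mktemp)
(
  sleep 60 &                      # stand-in for the spdk_tgt process
  tgt_pid=$!
  echo "$tgt_pid" > "$tmpfile"
  trap 'kill "$tgt_pid" 2>/dev/null || true; wait "$tgt_pid" 2>/dev/null || true' EXIT SIGINT SIGTERM
  # ... test body would run here; the trap fires on any exit path ...
)
tgt_pid=$(cat "$tmpfile")
rm -f "$tmpfile"
```

This is why a failed test in the log still ends with the target's shutdown messages: the EXIT trap runs the same cleanup the success path would.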
00:18:49.279 [2024-10-25 18:00:07.622173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89938 ] 00:18:49.537 [2024-10-25 18:00:07.788166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.537 [2024-10-25 18:00:07.935187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 Malloc0 00:18:50.911 Malloc1 00:18:50.911 Malloc2 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 18:00:09 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "ef0903e9-5a1a-4d10-9e23-8ba790294442"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ef0903e9-5a1a-4d10-9e23-8ba790294442",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "ef0903e9-5a1a-4d10-9e23-8ba790294442",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1a2b0422-bc18-4e55-87e6-acc9fb2c2b56",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "08b629dc-5421-46ee-90fd-f064dc729f24",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "68a9d0ce-fafd-4da8-a539-968fa5f521f0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:50.911 18:00:09 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89938 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 89938 ']' 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 89938 00:18:50.911 18:00:09 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.912 
18:00:09 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89938 00:18:50.912 killing process with pid 89938 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89938' 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 89938 00:18:50.912 18:00:09 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 89938 00:18:54.192 18:00:12 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:54.192 18:00:12 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:54.192 18:00:12 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:54.192 18:00:12 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:54.192 18:00:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.192 ************************************ 00:18:54.192 START TEST bdev_hello_world 00:18:54.192 ************************************ 00:18:54.192 18:00:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:54.192 [2024-10-25 18:00:12.332735] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:18:54.192 [2024-10-25 18:00:12.332996] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90011 ] 00:18:54.192 [2024-10-25 18:00:12.488591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.192 [2024-10-25 18:00:12.609151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.760 [2024-10-25 18:00:13.130729] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:54.760 [2024-10-25 18:00:13.130786] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:54.760 [2024-10-25 18:00:13.130803] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:54.760 [2024-10-25 18:00:13.131336] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:54.760 [2024-10-25 18:00:13.131480] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:54.760 [2024-10-25 18:00:13.131497] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:54.760 [2024-10-25 18:00:13.131545] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:54.760 00:18:54.760 [2024-10-25 18:00:13.131563] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:56.138 ************************************ 00:18:56.138 END TEST bdev_hello_world 00:18:56.138 00:18:56.138 real 0m2.288s 00:18:56.138 user 0m1.925s 00:18:56.138 sys 0m0.240s 00:18:56.138 18:00:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.138 18:00:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:56.138 ************************************ 00:18:56.397 18:00:14 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:56.397 18:00:14 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:56.397 18:00:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.397 18:00:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.397 ************************************ 00:18:56.397 START TEST bdev_bounds 00:18:56.397 ************************************ 00:18:56.397 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:56.397 18:00:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90053 00:18:56.397 18:00:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90053' 00:18:56.398 Process bdevio pid: 90053 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90053 00:18:56.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90053 ']' 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.398 18:00:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:56.398 [2024-10-25 18:00:14.678983] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:56.398 [2024-10-25 18:00:14.679191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90053 ] 00:18:56.657 [2024-10-25 18:00:14.857092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:56.657 [2024-10-25 18:00:14.976664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.657 [2024-10-25 18:00:14.976813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.657 [2024-10-25 18:00:14.976900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.226 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.226 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:57.226 18:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:57.226 I/O targets: 00:18:57.226 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:18:57.226 00:18:57.226 00:18:57.226 CUnit - A unit testing framework for C - Version 2.1-3 00:18:57.226 http://cunit.sourceforge.net/ 00:18:57.226 00:18:57.226 00:18:57.226 Suite: bdevio tests on: raid5f 00:18:57.226 Test: blockdev write read block ...passed 00:18:57.226 Test: blockdev write zeroes read block ...passed 00:18:57.487 Test: blockdev write zeroes read no split ...passed 00:18:57.487 Test: blockdev write zeroes read split ...passed 00:18:57.487 Test: blockdev write zeroes read split partial ...passed 00:18:57.487 Test: blockdev reset ...passed 00:18:57.487 Test: blockdev write read 8 blocks ...passed 00:18:57.487 Test: blockdev write read size > 128k ...passed 00:18:57.487 Test: blockdev write read invalid size ...passed 00:18:57.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:57.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:57.487 Test: blockdev write read max offset ...passed 00:18:57.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:57.487 Test: blockdev writev readv 8 blocks ...passed 00:18:57.487 Test: blockdev writev readv 30 x 1block ...passed 00:18:57.487 Test: blockdev writev readv block ...passed 00:18:57.487 Test: blockdev writev readv size > 128k ...passed 00:18:57.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:57.487 Test: blockdev comparev and writev ...passed 00:18:57.487 Test: blockdev nvme passthru rw ...passed 00:18:57.487 Test: blockdev nvme passthru vendor specific ...passed 00:18:57.487 Test: blockdev nvme admin passthru ...passed 00:18:57.487 Test: blockdev copy ...passed 00:18:57.487 00:18:57.487 Run Summary: Type Total Ran Passed Failed Inactive 00:18:57.487 suites 1 1 n/a 0 0 00:18:57.487 tests 23 23 23 0 0 00:18:57.487 asserts 130 130 130 0 n/a 00:18:57.487 00:18:57.487 Elapsed time = 0.640 seconds 00:18:57.487 0 00:18:57.746 18:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90053 00:18:57.746 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90053 ']' 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90053 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90053 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:57.747 killing process with pid 90053 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90053' 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90053 00:18:57.747 18:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90053 00:18:59.124 18:00:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:59.124 00:18:59.124 real 0m2.796s 00:18:59.124 user 0m6.978s 00:18:59.124 sys 0m0.397s 00:18:59.124 ************************************ 00:18:59.124 END TEST bdev_bounds 00:18:59.124 ************************************ 00:18:59.124 18:00:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.124 18:00:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:59.124 18:00:17 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:59.124 18:00:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:59.124 18:00:17 blockdev_raid5f -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:18:59.124 18:00:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.124 ************************************ 00:18:59.124 START TEST bdev_nbd 00:18:59.124 ************************************ 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:59.124 18:00:17 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90118 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90118 /var/tmp/spdk-nbd.sock 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90118 ']' 00:18:59.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.124 18:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:59.124 [2024-10-25 18:00:17.551002] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:18:59.124 [2024-10-25 18:00:17.551209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.384 [2024-10-25 18:00:17.727453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.644 [2024-10-25 18:00:17.841279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.232 1+0 records in 00:19:00.232 1+0 records out 00:19:00.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562461 s, 7.3 MB/s 00:19:00.232 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:00.492 { 00:19:00.492 "nbd_device": "/dev/nbd0", 00:19:00.492 "bdev_name": "raid5f" 00:19:00.492 } 00:19:00.492 ]' 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:00.492 { 00:19:00.492 "nbd_device": "/dev/nbd0", 00:19:00.492 "bdev_name": "raid5f" 00:19:00.492 } 00:19:00.492 ]' 00:19:00.492 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.753 18:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:00.753 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.754 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:00.754 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:00.754 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.014 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:01.274 /dev/nbd0 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.274 18:00:19 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.274 1+0 records in 00:19:01.274 1+0 records out 00:19:01.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524958 s, 7.8 MB/s 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.274 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:01.534 { 00:19:01.534 "nbd_device": "/dev/nbd0", 00:19:01.534 "bdev_name": "raid5f" 00:19:01.534 } 00:19:01.534 ]' 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:01.534 { 00:19:01.534 "nbd_device": "/dev/nbd0", 00:19:01.534 "bdev_name": "raid5f" 00:19:01.534 } 00:19:01.534 ]' 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:01.534 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:01.794 256+0 records in 00:19:01.794 256+0 records out 00:19:01.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125709 s, 83.4 MB/s 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:01.794 18:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:01.794 256+0 records in 00:19:01.794 256+0 records out 00:19:01.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029327 s, 35.8 MB/s 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.794 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.054 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:02.055 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:02.055 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:02.055 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:02.314 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:02.314 malloc_lvol_verify 00:19:02.572 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:02.572 a4f72453-6d73-4958-8850-12a35a0ac661 00:19:02.572 18:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:02.831 b6035090-115f-49c0-81d8-ca3f6b0bac8f 00:19:02.831 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:03.091 /dev/nbd0 00:19:03.091 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:03.091 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:03.091 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:03.091 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:03.091 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:03.091 mke2fs 1.47.0 (5-Feb-2023) 00:19:03.092 Discarding device blocks: 0/4096 done 00:19:03.092 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:03.092 00:19:03.092 Allocating group tables: 0/1 done 00:19:03.092 Writing inode tables: 0/1 done 00:19:03.092 Creating journal (1024 blocks): done 00:19:03.092 Writing superblocks and filesystem accounting information: 0/1 done 00:19:03.092 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.092 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90118 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90118 ']' 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90118 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90118 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.351 killing process with pid 90118 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90118' 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90118 00:19:03.351 18:00:21 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90118 00:19:04.732 18:00:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:04.732 00:19:04.732 real 0m5.697s 00:19:04.732 user 0m7.760s 00:19:04.732 sys 0m1.294s 00:19:04.732 18:00:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.732 18:00:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:04.732 ************************************ 00:19:04.732 END TEST bdev_nbd 00:19:04.732 ************************************ 00:19:04.993 18:00:23 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:04.993 18:00:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:04.993 18:00:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:04.993 18:00:23 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:04.993 18:00:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:04.993 18:00:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.993 18:00:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.993 ************************************ 00:19:04.993 START TEST bdev_fio 00:19:04.993 ************************************ 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:04.993 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:04.993 ************************************ 00:19:04.993 START TEST bdev_fio_rw_verify 00:19:04.993 ************************************ 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:04.993 18:00:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:05.253 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:05.253 fio-3.35 00:19:05.253 Starting 1 thread 00:19:17.470 00:19:17.470 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90314: Fri Oct 25 18:00:34 2024 00:19:17.470 read: IOPS=10.8k, BW=42.0MiB/s (44.0MB/s)(420MiB/10001msec) 00:19:17.470 slat (usec): min=17, max=102, avg=22.14, stdev= 3.12 00:19:17.470 clat (usec): min=11, max=386, avg=147.90, stdev=54.42 00:19:17.470 lat (usec): min=31, max=412, avg=170.04, stdev=55.18 00:19:17.470 clat percentiles (usec): 00:19:17.470 | 50.000th=[ 145], 99.000th=[ 269], 99.900th=[ 297], 99.990th=[ 334], 00:19:17.470 | 99.999th=[ 371] 00:19:17.470 write: IOPS=11.3k, BW=44.2MiB/s (46.3MB/s)(436MiB/9877msec); 0 zone resets 00:19:17.470 slat (usec): min=7, max=213, avg=18.96, stdev= 4.29 00:19:17.470 clat (usec): min=63, max=1316, avg=338.83, stdev=53.86 00:19:17.470 lat (usec): min=79, max=1530, avg=357.79, stdev=55.50 00:19:17.470 clat percentiles (usec): 00:19:17.470 | 50.000th=[ 338], 99.000th=[ 465], 99.900th=[ 553], 99.990th=[ 1123], 00:19:17.470 | 99.999th=[ 1254] 00:19:17.470 bw ( KiB/s): min=42104, max=51248, per=98.43%, avg=44519.58, stdev=2229.04, samples=19 00:19:17.470 iops : min=10526, max=12812, avg=11129.89, stdev=557.26, samples=19 00:19:17.470 lat (usec) : 20=0.01%, 50=0.01%, 100=11.64%, 
250=38.40%, 500=49.83% 00:19:17.470 lat (usec) : 750=0.10%, 1000=0.01% 00:19:17.470 lat (msec) : 2=0.01% 00:19:17.470 cpu : usr=98.94%, sys=0.44%, ctx=21, majf=0, minf=9004 00:19:17.470 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.470 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.470 issued rwts: total=107532,111681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:17.470 00:19:17.470 Run status group 0 (all jobs): 00:19:17.470 READ: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=420MiB (440MB), run=10001-10001msec 00:19:17.470 WRITE: bw=44.2MiB/s (46.3MB/s), 44.2MiB/s-44.2MiB/s (46.3MB/s-46.3MB/s), io=436MiB (457MB), run=9877-9877msec 00:19:17.730 ----------------------------------------------------- 00:19:17.730 Suppressions used: 00:19:17.730 count bytes template 00:19:17.730 1 7 /usr/src/fio/parse.c 00:19:17.730 802 76992 /usr/src/fio/iolog.c 00:19:17.730 1 8 libtcmalloc_minimal.so 00:19:17.730 1 904 libcrypto.so 00:19:17.730 ----------------------------------------------------- 00:19:17.730 00:19:17.990 00:19:17.990 real 0m12.805s 00:19:17.990 user 0m12.910s 00:19:17.990 sys 0m0.708s 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:17.990 ************************************ 00:19:17.990 END TEST bdev_fio_rw_verify 00:19:17.990 ************************************ 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:17.990 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "ef0903e9-5a1a-4d10-9e23-8ba790294442"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"ef0903e9-5a1a-4d10-9e23-8ba790294442",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "ef0903e9-5a1a-4d10-9e23-8ba790294442",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1a2b0422-bc18-4e55-87e6-acc9fb2c2b56",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "08b629dc-5421-46ee-90fd-f064dc729f24",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "68a9d0ce-fafd-4da8-a539-968fa5f521f0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.991 /home/vagrant/spdk_repo/spdk 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:17.991 00:19:17.991 real 0m13.095s 00:19:17.991 user 0m13.025s 00:19:17.991 sys 0m0.844s 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.991 18:00:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:17.991 ************************************ 00:19:17.991 END TEST bdev_fio 00:19:17.991 ************************************ 00:19:17.991 18:00:36 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:17.991 18:00:36 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:17.991 18:00:36 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:17.991 18:00:36 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.991 18:00:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:17.991 ************************************ 00:19:17.991 START TEST bdev_verify 00:19:17.991 ************************************ 00:19:17.991 18:00:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:18.251 [2024-10-25 18:00:36.462673] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:19:18.251 [2024-10-25 18:00:36.462791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90478 ] 00:19:18.251 [2024-10-25 18:00:36.636318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.513 [2024-10-25 18:00:36.753717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.513 [2024-10-25 18:00:36.753750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.083 Running I/O for 5 seconds... 00:19:20.950 13322.00 IOPS, 52.04 MiB/s [2024-10-25T18:00:40.323Z] 13610.50 IOPS, 53.17 MiB/s [2024-10-25T18:00:41.701Z] 14045.67 IOPS, 54.87 MiB/s [2024-10-25T18:00:42.640Z] 14263.25 IOPS, 55.72 MiB/s [2024-10-25T18:00:42.640Z] 14079.00 IOPS, 55.00 MiB/s 00:19:24.204 Latency(us) 00:19:24.204 [2024-10-25T18:00:42.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.204 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:24.204 Verification LBA range: start 0x0 length 0x2000 00:19:24.204 raid5f : 5.02 6986.25 27.29 0.00 0.00 27608.81 187.81 23123.62 00:19:24.204 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:24.204 Verification LBA range: start 0x2000 length 0x2000 00:19:24.204 raid5f : 5.02 7099.87 27.73 0.00 0.00 27049.50 343.42 22551.25 00:19:24.204 [2024-10-25T18:00:42.640Z] =================================================================================================================== 00:19:24.204 [2024-10-25T18:00:42.640Z] Total : 14086.12 55.02 0.00 0.00 27326.92 187.81 23123.62 00:19:25.656 00:19:25.656 real 0m7.363s 00:19:25.656 user 0m13.616s 00:19:25.656 sys 0m0.279s 00:19:25.656 18:00:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.656 18:00:43 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:25.656 ************************************ 00:19:25.656 END TEST bdev_verify 00:19:25.656 ************************************ 00:19:25.656 18:00:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:25.656 18:00:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:25.656 18:00:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.656 18:00:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.656 ************************************ 00:19:25.656 START TEST bdev_verify_big_io 00:19:25.656 ************************************ 00:19:25.656 18:00:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:25.656 [2024-10-25 18:00:43.895290] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:19:25.656 [2024-10-25 18:00:43.895412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90571 ] 00:19:25.656 [2024-10-25 18:00:44.070946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:25.915 [2024-10-25 18:00:44.193331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.915 [2024-10-25 18:00:44.193368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.484 Running I/O for 5 seconds... 
00:19:28.431 693.00 IOPS, 43.31 MiB/s [2024-10-25T18:00:48.249Z] 760.00 IOPS, 47.50 MiB/s [2024-10-25T18:00:49.195Z] 761.33 IOPS, 47.58 MiB/s [2024-10-25T18:00:50.133Z] 761.00 IOPS, 47.56 MiB/s [2024-10-25T18:00:50.133Z] 761.60 IOPS, 47.60 MiB/s 00:19:31.697 Latency(us) 00:19:31.697 [2024-10-25T18:00:50.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.697 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:31.697 Verification LBA range: start 0x0 length 0x200 00:19:31.697 raid5f : 5.21 377.30 23.58 0.00 0.00 8326018.63 144.88 364483.19 00:19:31.697 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:31.697 Verification LBA range: start 0x200 length 0x200 00:19:31.697 raid5f : 5.21 377.57 23.60 0.00 0.00 8299375.17 275.45 360820.04 00:19:31.697 [2024-10-25T18:00:50.133Z] =================================================================================================================== 00:19:31.697 [2024-10-25T18:00:50.133Z] Total : 754.87 47.18 0.00 0.00 8312696.90 144.88 364483.19 00:19:33.601 00:19:33.601 real 0m7.776s 00:19:33.601 user 0m14.362s 00:19:33.601 sys 0m0.275s 00:19:33.601 18:00:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.601 18:00:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.601 ************************************ 00:19:33.601 END TEST bdev_verify_big_io 00:19:33.601 ************************************ 00:19:33.601 18:00:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.601 18:00:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:33.601 18:00:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.601 18:00:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.601 ************************************ 00:19:33.601 START TEST bdev_write_zeroes 00:19:33.601 ************************************ 00:19:33.601 18:00:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.601 [2024-10-25 18:00:51.718832] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:19:33.601 [2024-10-25 18:00:51.718969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90675 ] 00:19:33.601 [2024-10-25 18:00:51.894613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.859 [2024-10-25 18:00:52.088993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.426 Running I/O for 1 seconds... 
00:19:35.361 23919.00 IOPS, 93.43 MiB/s 00:19:35.361 Latency(us) 00:19:35.361 [2024-10-25T18:00:53.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.361 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:35.361 raid5f : 1.01 23899.73 93.36 0.00 0.00 5337.62 1810.11 7269.06 00:19:35.361 [2024-10-25T18:00:53.797Z] =================================================================================================================== 00:19:35.361 [2024-10-25T18:00:53.797Z] Total : 23899.73 93.36 0.00 0.00 5337.62 1810.11 7269.06 00:19:36.737 00:19:36.737 real 0m3.436s 00:19:36.737 user 0m3.016s 00:19:36.737 sys 0m0.287s 00:19:36.737 18:00:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.737 18:00:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:36.737 ************************************ 00:19:36.737 END TEST bdev_write_zeroes 00:19:36.737 ************************************ 00:19:36.737 18:00:55 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:36.737 18:00:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:36.737 18:00:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.737 18:00:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:36.737 ************************************ 00:19:36.737 START TEST bdev_json_nonenclosed 00:19:36.737 ************************************ 00:19:36.737 18:00:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:36.996 [2024-10-25 
18:00:55.232630] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:19:36.996 [2024-10-25 18:00:55.232766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90728 ] 00:19:36.996 [2024-10-25 18:00:55.407386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.256 [2024-10-25 18:00:55.526673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.256 [2024-10-25 18:00:55.526794] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:37.256 [2024-10-25 18:00:55.526836] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:37.256 [2024-10-25 18:00:55.526850] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:37.516 00:19:37.516 real 0m0.652s 00:19:37.516 user 0m0.417s 00:19:37.516 sys 0m0.130s 00:19:37.516 18:00:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.516 18:00:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:37.516 ************************************ 00:19:37.516 END TEST bdev_json_nonenclosed 00:19:37.516 ************************************ 00:19:37.516 18:00:55 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:37.516 18:00:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:37.516 18:00:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.516 18:00:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:37.516 
************************************ 00:19:37.516 START TEST bdev_json_nonarray 00:19:37.516 ************************************ 00:19:37.516 18:00:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:37.516 [2024-10-25 18:00:55.945194] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:19:37.516 [2024-10-25 18:00:55.945312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90758 ] 00:19:37.775 [2024-10-25 18:00:56.121018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.034 [2024-10-25 18:00:56.237708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.034 [2024-10-25 18:00:56.237840] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:38.034 [2024-10-25 18:00:56.237863] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:38.034 [2024-10-25 18:00:56.237885] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:38.294 00:19:38.294 real 0m0.641s 00:19:38.294 user 0m0.406s 00:19:38.294 sys 0m0.128s 00:19:38.294 18:00:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.294 18:00:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 ************************************ 00:19:38.294 END TEST bdev_json_nonarray 00:19:38.294 ************************************ 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:38.294 18:00:56 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:38.294 00:19:38.294 real 0m49.243s 00:19:38.294 user 1m6.487s 00:19:38.294 sys 0m4.907s 00:19:38.294 18:00:56 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.294 18:00:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 
************************************ 00:19:38.294 END TEST blockdev_raid5f 00:19:38.294 ************************************ 00:19:38.294 18:00:56 -- spdk/autotest.sh@194 -- # uname -s 00:19:38.294 18:00:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:38.294 18:00:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.294 18:00:56 -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 18:00:56 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:38.294 18:00:56 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:38.294 18:00:56 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:19:38.294 18:00:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:38.294 18:00:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.294 18:00:56 -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 18:00:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:38.294 18:00:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:38.294 18:00:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:38.294 18:00:56 -- common/autotest_common.sh@10 -- # set +x 00:19:40.250 INFO: APP EXITING 00:19:40.250 INFO: killing all VMs 00:19:40.250 INFO: killing vhost app 00:19:40.250 INFO: EXIT DONE 00:19:40.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.818 Waiting for block devices as requested 00:19:40.818 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.076 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:42.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.036 Cleaning 00:19:42.036 Removing: /var/run/dpdk/spdk0/config 00:19:42.036 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:42.036 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:42.036 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:42.036 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:42.036 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:42.036 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:42.036 Removing: /dev/shm/spdk_tgt_trace.pid56831 00:19:42.036 Removing: /var/run/dpdk/spdk0 00:19:42.036 Removing: /var/run/dpdk/spdk_pid56596 00:19:42.036 Removing: /var/run/dpdk/spdk_pid56831 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57060 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57164 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57209 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57348 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57366 
00:19:42.036 Removing: /var/run/dpdk/spdk_pid57571 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57682 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57789 00:19:42.036 Removing: /var/run/dpdk/spdk_pid57906 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58014 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58053 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58090 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58166 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58283 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58730 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58805 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58874 00:19:42.036 Removing: /var/run/dpdk/spdk_pid58894 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59034 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59056 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59194 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59212 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59286 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59305 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59371 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59389 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59584 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59621 00:19:42.036 Removing: /var/run/dpdk/spdk_pid59710 00:19:42.036 Removing: /var/run/dpdk/spdk_pid61030 00:19:42.036 Removing: /var/run/dpdk/spdk_pid61236 00:19:42.036 Removing: /var/run/dpdk/spdk_pid61382 00:19:42.036 Removing: /var/run/dpdk/spdk_pid62014 00:19:42.036 Removing: /var/run/dpdk/spdk_pid62231 00:19:42.036 Removing: /var/run/dpdk/spdk_pid62371 00:19:42.036 Removing: /var/run/dpdk/spdk_pid63003 00:19:42.036 Removing: /var/run/dpdk/spdk_pid63328 00:19:42.036 Removing: /var/run/dpdk/spdk_pid63468 00:19:42.036 Removing: /var/run/dpdk/spdk_pid64844 00:19:42.036 Removing: /var/run/dpdk/spdk_pid65097 00:19:42.036 Removing: /var/run/dpdk/spdk_pid65243 00:19:42.036 Removing: /var/run/dpdk/spdk_pid66621 00:19:42.036 Removing: /var/run/dpdk/spdk_pid66876 00:19:42.036 Removing: /var/run/dpdk/spdk_pid67016 
00:19:42.036 Removing: /var/run/dpdk/spdk_pid68396 00:19:42.036 Removing: /var/run/dpdk/spdk_pid68842 00:19:42.036 Removing: /var/run/dpdk/spdk_pid68982 00:19:42.036 Removing: /var/run/dpdk/spdk_pid70452 00:19:42.036 Removing: /var/run/dpdk/spdk_pid70711 00:19:42.036 Removing: /var/run/dpdk/spdk_pid70861 00:19:42.036 Removing: /var/run/dpdk/spdk_pid72344 00:19:42.036 Removing: /var/run/dpdk/spdk_pid72614 00:19:42.036 Removing: /var/run/dpdk/spdk_pid72761 00:19:42.036 Removing: /var/run/dpdk/spdk_pid74254 00:19:42.036 Removing: /var/run/dpdk/spdk_pid74751 00:19:42.036 Removing: /var/run/dpdk/spdk_pid74904 00:19:42.294 Removing: /var/run/dpdk/spdk_pid75053 00:19:42.294 Removing: /var/run/dpdk/spdk_pid75499 00:19:42.294 Removing: /var/run/dpdk/spdk_pid76251 00:19:42.294 Removing: /var/run/dpdk/spdk_pid76656 00:19:42.294 Removing: /var/run/dpdk/spdk_pid77358 00:19:42.294 Removing: /var/run/dpdk/spdk_pid77828 00:19:42.294 Removing: /var/run/dpdk/spdk_pid78627 00:19:42.294 Removing: /var/run/dpdk/spdk_pid79066 00:19:42.294 Removing: /var/run/dpdk/spdk_pid81043 00:19:42.294 Removing: /var/run/dpdk/spdk_pid81487 00:19:42.294 Removing: /var/run/dpdk/spdk_pid81922 00:19:42.295 Removing: /var/run/dpdk/spdk_pid84021 00:19:42.295 Removing: /var/run/dpdk/spdk_pid84505 00:19:42.295 Removing: /var/run/dpdk/spdk_pid85008 00:19:42.295 Removing: /var/run/dpdk/spdk_pid86076 00:19:42.295 Removing: /var/run/dpdk/spdk_pid86404 00:19:42.295 Removing: /var/run/dpdk/spdk_pid87358 00:19:42.295 Removing: /var/run/dpdk/spdk_pid87688 00:19:42.295 Removing: /var/run/dpdk/spdk_pid88636 00:19:42.295 Removing: /var/run/dpdk/spdk_pid88966 00:19:42.295 Removing: /var/run/dpdk/spdk_pid89651 00:19:42.295 Removing: /var/run/dpdk/spdk_pid89938 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90011 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90053 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90299 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90478 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90571 
00:19:42.295 Removing: /var/run/dpdk/spdk_pid90675 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90728 00:19:42.295 Removing: /var/run/dpdk/spdk_pid90758 00:19:42.295 Clean 00:19:42.295 18:01:00 -- common/autotest_common.sh@1449 -- # return 0 00:19:42.295 18:01:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:42.295 18:01:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.295 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:19:42.295 18:01:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:42.295 18:01:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.295 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:19:42.552 18:01:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:42.552 18:01:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:42.552 18:01:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:42.552 18:01:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:42.552 18:01:00 -- spdk/autotest.sh@394 -- # hostname 00:19:42.552 18:01:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:42.552 geninfo: WARNING: invalid characters removed from testname! 
00:20:09.097 18:01:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:09.097 18:01:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:10.476 18:01:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.385 18:01:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.918 18:01:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.453 18:01:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:19.996 18:01:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:19.996 18:01:37 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:20:19.996 18:01:37 -- common/autotest_common.sh@1689 -- $ lcov --version 00:20:19.996 18:01:37 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:20:19.996 18:01:37 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:20:19.996 18:01:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:19.996 18:01:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:19.996 18:01:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:19.996 18:01:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:19.996 18:01:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:19.996 18:01:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:19.996 18:01:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:19.996 18:01:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:19.996 18:01:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:19.996 18:01:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:19.996 18:01:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:19.996 18:01:37 -- scripts/common.sh@344 -- $ case "$op" in 00:20:19.996 18:01:37 -- scripts/common.sh@345 -- $ : 1 00:20:19.996 18:01:37 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:19.996 18:01:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.996 18:01:37 -- scripts/common.sh@365 -- $ decimal 1 00:20:19.996 18:01:37 -- scripts/common.sh@353 -- $ local d=1 00:20:19.996 18:01:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:19.996 18:01:37 -- scripts/common.sh@355 -- $ echo 1 00:20:19.996 18:01:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:20:19.996 18:01:37 -- scripts/common.sh@366 -- $ decimal 2 00:20:19.996 18:01:38 -- scripts/common.sh@353 -- $ local d=2 00:20:19.997 18:01:38 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:19.997 18:01:38 -- scripts/common.sh@355 -- $ echo 2 00:20:19.997 18:01:38 -- scripts/common.sh@366 -- $ ver2[v]=2 00:20:19.997 18:01:38 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:20:19.997 18:01:38 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:20:19.997 18:01:38 -- scripts/common.sh@368 -- $ return 0 00:20:19.997 18:01:38 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.997 18:01:38 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:20:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.997 --rc genhtml_branch_coverage=1 00:20:19.997 --rc genhtml_function_coverage=1 00:20:19.997 --rc genhtml_legend=1 00:20:19.997 --rc geninfo_all_blocks=1 00:20:19.997 --rc geninfo_unexecuted_blocks=1 00:20:19.997 00:20:19.997 ' 00:20:19.997 18:01:38 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:20:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.997 --rc genhtml_branch_coverage=1 00:20:19.997 --rc genhtml_function_coverage=1 00:20:19.997 --rc genhtml_legend=1 00:20:19.997 --rc geninfo_all_blocks=1 00:20:19.997 --rc geninfo_unexecuted_blocks=1 00:20:19.997 00:20:19.997 ' 00:20:19.997 18:01:38 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:20:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.997 --rc genhtml_branch_coverage=1 00:20:19.997 --rc 
genhtml_function_coverage=1 00:20:19.997 --rc genhtml_legend=1 00:20:19.997 --rc geninfo_all_blocks=1 00:20:19.997 --rc geninfo_unexecuted_blocks=1 00:20:19.997 00:20:19.997 ' 00:20:19.997 18:01:38 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:20:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.997 --rc genhtml_branch_coverage=1 00:20:19.997 --rc genhtml_function_coverage=1 00:20:19.997 --rc genhtml_legend=1 00:20:19.997 --rc geninfo_all_blocks=1 00:20:19.997 --rc geninfo_unexecuted_blocks=1 00:20:19.997 00:20:19.997 ' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.997 18:01:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:19.997 18:01:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:19.997 18:01:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.997 18:01:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.997 18:01:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.997 18:01:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.997 18:01:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.997 18:01:38 -- paths/export.sh@5 -- $ export PATH 00:20:19.997 18:01:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.997 18:01:38 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:19.997 18:01:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:20:19.997 18:01:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729879298.XXXXXX 00:20:19.997 18:01:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729879298.x3Mn9F 00:20:19.997 18:01:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:20:19.997 18:01:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:20:19.997 18:01:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:19.997 18:01:38 -- common/autotest_common.sh@10 -- $ set +x 00:20:19.997 18:01:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:20:19.997 18:01:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:20:19.997 18:01:38 -- pm/common@17 -- $ local monitor 00:20:19.997 18:01:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:19.997 18:01:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:19.997 18:01:38 -- pm/common@25 -- $ sleep 1 00:20:19.997 18:01:38 -- pm/common@21 -- $ date +%s 00:20:19.997 18:01:38 -- pm/common@21 -- $ date +%s 00:20:19.997 18:01:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729879298 00:20:19.997 18:01:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729879298 00:20:19.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729879298_collect-cpu-load.pm.log 00:20:19.997 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729879298_collect-vmstat.pm.log 00:20:20.940 18:01:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:20:20.940 18:01:39 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:20:20.940 18:01:39 -- spdk/autopackage.sh@14 -- $ timing_finish 00:20:20.940 18:01:39 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:20.940 18:01:39 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:20.940 
18:01:39 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:20.940 18:01:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:20.940 18:01:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:20.940 18:01:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:20.940 18:01:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:20.940 18:01:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:20.940 18:01:39 -- pm/common@44 -- $ pid=92261 00:20:20.940 18:01:39 -- pm/common@50 -- $ kill -TERM 92261 00:20:20.940 18:01:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:20.940 18:01:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:20.940 18:01:39 -- pm/common@44 -- $ pid=92263 00:20:20.940 18:01:39 -- pm/common@50 -- $ kill -TERM 92263 00:20:20.940 + [[ -n 5419 ]] 00:20:20.940 + sudo kill 5419 00:20:20.949 [Pipeline] } 00:20:20.964 [Pipeline] // timeout 00:20:20.970 [Pipeline] } 00:20:20.984 [Pipeline] // stage 00:20:20.989 [Pipeline] } 00:20:21.004 [Pipeline] // catchError 00:20:21.013 [Pipeline] stage 00:20:21.015 [Pipeline] { (Stop VM) 00:20:21.027 [Pipeline] sh 00:20:21.309 + vagrant halt 00:20:24.595 ==> default: Halting domain... 00:20:32.722 [Pipeline] sh 00:20:33.010 + vagrant destroy -f 00:20:36.299 ==> default: Removing domain... 
00:20:36.312 [Pipeline] sh 00:20:36.597 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:36.606 [Pipeline] } 00:20:36.620 [Pipeline] // stage 00:20:36.625 [Pipeline] } 00:20:36.639 [Pipeline] // dir 00:20:36.644 [Pipeline] } 00:20:36.658 [Pipeline] // wrap 00:20:36.665 [Pipeline] } 00:20:36.678 [Pipeline] // catchError 00:20:36.688 [Pipeline] stage 00:20:36.690 [Pipeline] { (Epilogue) 00:20:36.703 [Pipeline] sh 00:20:36.985 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:43.561 [Pipeline] catchError 00:20:43.563 [Pipeline] { 00:20:43.576 [Pipeline] sh 00:20:43.861 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:43.861 Artifacts sizes are good 00:20:43.872 [Pipeline] } 00:20:43.886 [Pipeline] // catchError 00:20:43.896 [Pipeline] archiveArtifacts 00:20:43.904 Archiving artifacts 00:20:44.005 [Pipeline] cleanWs 00:20:44.016 [WS-CLEANUP] Deleting project workspace... 00:20:44.016 [WS-CLEANUP] Deferred wipeout is used... 00:20:44.022 [WS-CLEANUP] done 00:20:44.024 [Pipeline] } 00:20:44.042 [Pipeline] // stage 00:20:44.046 [Pipeline] } 00:20:44.058 [Pipeline] // node 00:20:44.064 [Pipeline] End of Pipeline 00:20:44.098 Finished: SUCCESS